Last Update 1:14 PM October 15, 2024 (UTC)

Company Feeds | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Tuesday, 15. October 2024

Tokeny Solutions

RWA and DePIN: The Future of Assets and Infrastructure

What is RWA?

In the blockchain world, Real World Assets (RWA) refer to tangible, physical assets with economic value, such as real estate, gold, vehicles, and art. Tokenizing these assets offers three main benefits: it opens the door for more people to invest by lowering barriers to entry, enables easy transferability—similar to sending a PayPal transaction—and allows the assets to be used in decentralized finance (DeFi) applications, such as providing liquidity in an AMM or using them as collateral to borrow tokenized cash.

Fractionalizing the ownership of these RWAs typically means turning the assets into financial instruments. This usually involves creating an investment vehicle, such as a Special Purpose Vehicle (SPV), to hold the underlying asset. Tokenization is the process of representing ownership of financial instruments, such as shares or debt of the SPV, as tokens on a blockchain, allowing for digital purchase, self-custody, easy transfer, and usage of the assets. Because these tokens represent securities, they must comply with strict regulatory rules: only qualified investors meeting regulatory conditions can trade and hold them.

In most cases, the ERC-20 standard should not be used for tokenizing RWAs, because ERC-20 tokens are permissionless and can be transferred to anyone without restriction, effectively making them bearer instruments, which are illegal in most jurisdictions. This is where permissioned tokens using the ERC-3643 standard become vital: they ensure that only qualified users can hold them, which is crucial for compliance with regulations.
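To make the distinction concrete, here is a minimal sketch in Python of how a permissioned token restricts transfers to verified investors. It is purely conceptual: the class and method names are our own, not the actual ERC-3643 Solidity interface.

```python
# Conceptual sketch (not the actual ERC-3643 interface): a permissioned
# token that consults an identity registry before allowing any transfer.

class IdentityRegistry:
    """Tracks which wallet addresses belong to verified, qualified investors."""
    def __init__(self):
        self._verified: set[str] = set()

    def register(self, address: str) -> None:
        # In a real deployment, registration would follow KYC/AML checks
        # performed by a trusted claim issuer.
        self._verified.add(address)

    def is_verified(self, address: str) -> bool:
        return address in self._verified


class PermissionedToken:
    """Unlike a plain ERC-20, transfers succeed only between verified holders."""
    def __init__(self, registry: IdentityRegistry):
        self.registry = registry
        self.balances: dict[str, int] = {}

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        if not (self.registry.is_verified(sender) and self.registry.is_verified(recipient)):
            raise PermissionError("both parties must be verified investors")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
```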

The RWA market is one of the fastest-growing markets in the blockchain industry, having reached an all-time high of $12 billion in tokenized assets, according to a Binance Research report. However, this figure doesn’t fully capture the market’s scale.

“At Tokeny alone, we’ve facilitated the tokenization of more than $32 billion worth of assets onchain.”

Shurong Li, Head of Marketing at Tokeny


Many of our clients choose not to make their data publicly available, as these are often private assets. Additionally, large institutions face challenges in accepting onchain cash due to regulatory uncertainty, so they still prefer to invest in fiat. Given the current scale, we expect this market to grow significantly in the coming years.

What is DePIN?

Decentralized Physical Infrastructure Networks (DePIN) are an emerging concept in which decentralized networks are used to manage and operate physical infrastructure, including cloud services, wireless networks, sensor networks, and mobility and energy networks. DePIN networks use token incentives to encourage individuals to bootstrap the supply of infrastructure, without relying on centralized operators or outside resources. This addresses a core problem with traditional centralized infrastructure: building and maintaining it requires so much time and money that it is nearly impossible for individuals to build such networks on their own.

The main driver of DePIN systems is for Web3 companies to outsource the building and maintenance of these network infrastructures. Take Hivemapper, for example, a decentralized digital map of the world (a sensor network). It provides users, known as “mappers”, with a dashcam to drive around and capture real-life images of everything they pass, which is one of the methods used to build and maintain the infrastructure of this network. The incentive for contributing individuals is earning tokens that hold monetary value and can be redeemed to access premium map data and to participate in governance decisions. The more a user contributes, the more infrastructure is built and maintained, and the more tokens they receive as an incentive.
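The reward mechanics can be illustrated with a toy model: contributors share a fixed per-epoch token emission in proportion to what they contributed. This is a hypothetical sketch for intuition, not Hivemapper's actual reward scheme; all numbers and names are made up.

```python
# Toy model of a DePIN reward epoch (not Hivemapper's actual scheme): each
# contributor earns a share of a fixed token emission proportional to the
# infrastructure they contributed during the epoch.

EPOCH_EMISSION = 10_000  # hypothetical tokens minted per epoch

def distribute_rewards(contributions: dict[str, float]) -> dict[str, float]:
    """Split the epoch's emission pro rata by contribution (e.g. km of road mapped)."""
    total = sum(contributions.values())
    if total == 0:
        return {who: 0.0 for who in contributions}
    return {who: EPOCH_EMISSION * amount / total for who, amount in contributions.items()}

print(distribute_rewards({"mapper_a": 120.0, "mapper_b": 60.0, "mapper_c": 20.0}))
# {'mapper_a': 6000.0, 'mapper_b': 3000.0, 'mapper_c': 1000.0}
```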

What is the Difference Between the Two?

Although RWAs and DePIN both interact with the physical world, they have different use cases and operate in distinct ways. These differences include their purpose, the markets they operate in, the regulations involved, and the concept of ownership versus contribution.

RWA operates in the financial sector, involving tangible real-world assets like real estate, gold, or art that are converted into tokens representing fractionalized ownership. These tokens can be bought, sold, and traded among authorized investors.

“To ensure compliance, RWAs must strictly follow regulations, often using permissioned tokens such as ERC-3643.”

Luc Falempin, CEO of Tokeny


The goal of RWA is to democratize the investment and ownership of physical assets, making them more accessible to a wider range of investors through tokenization.

In contrast, DePIN focuses on decentralizing the construction and maintenance of networks in the infrastructure sector. Instead of tokenizing existing assets, DePIN networks incentivize individuals to contribute physical resources such as server hosting, energy storage, and data collection. In exchange for their participation, contributors earn tokens that hold monetary value and often provide exclusive benefits. DePIN faces fewer regulatory challenges since it involves contributions to infrastructure rather than ownership of assets.

At the same time, both RWA and DePIN require onchain identity management. For RWA, onchain identity ensures compliance by verifying KYC status and guarantees that ownership cannot be lost. Tokenized RWAs also have their own onchain identity, which allows the data associated with the assets themselves to be enriched. In the case of DePIN, without robust verification of the devices or service providers contributing to the network, there is a risk of payouts being claimed fraudulently, which can harm the network’s performance. This makes decentralized identity (DID) frameworks crucial for DePIN as well.

ONCHAINID, the open-source DID framework used in ERC-3643, is an excellent solution here. A verifier can conduct the necessary checks and issue verifiable credentials as proof. This ensures that only properly functioning devices and valid contributions to the network are recognized, maintaining the integrity and sustainability of the system and enhancing its overall performance.
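The issue-then-verify flow can be sketched as follows. This is a deliberately simplified illustration: HMAC with a shared key stands in for the issuer's signature, whereas real ONCHAINID credentials use onchain claims and public-key cryptography, and all names here are hypothetical.

```python
# Minimal sketch of the issue-then-verify flow described above. HMAC stands in
# for the issuer's signature; real ONCHAINID claims live onchain with proper
# key management. All names are illustrative.
import hmac, hashlib, json

ISSUER_KEY = b"issuer-secret"  # hypothetical issuer signing key

def issue_claim(subject: str, claim: dict) -> dict:
    """A verifier performs its checks, then signs a claim about the subject."""
    payload = json.dumps({"subject": subject, "claim": claim}, sort_keys=True).encode()
    return {"subject": subject, "claim": claim,
            "signature": hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()}

def verify_claim(credential: dict) -> bool:
    """Anyone trusting the issuer can check the claim without re-running the checks."""
    payload = json.dumps({"subject": credential["subject"],
                          "claim": credential["claim"]}, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue_claim("device-0x42", {"type": "device-health", "status": "operational"})
assert verify_claim(cred)
```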

What is the Opportunity to Make the Two Work Together?

Combining RWA and DePIN presents a significant opportunity to transform both financial investments and infrastructure development. Together these sectors can push forward growth and innovation in the form of a hybrid ecosystem.

Tokenization of Infrastructure: Co-Ownership of DePIN Devices. Real-world devices such as renewable energy systems or critical IoT infrastructure can be costly for individual investors. By tokenizing ownership, for example through the units or shares of a fund that invests in one or many DePIN devices, people can co-own multiple DePIN devices. The key benefits are listed below.

Improved Accessibility: Allowing individuals to co-own expensive infrastructure devices. This opens up investment opportunities for a wider pool of participants, making it possible for people to co-own high-value assets like solar panels or data nodes.

Enhanced Transferability: Unlike physical devices, which can be difficult to sell or exchange, fractional ownership can be more easily traded. Moreover, tokenization enables peer-to-peer transfers that enhance transferability and eventually increase the liquidity of the assets.

New Opportunities and Stability of the DePIN Network: Beyond just owning a piece of the infrastructure, the tokenized shares can also be used in DeFi applications. Investors can provide liquidity, stake, or use these tokens as collateral to generate additional financial yield, unlocking even more value from their co-ownership of the infrastructure assets while not needing to sell the devices, which helps ensure the stability of the DePIN network.

Conclusion

In conclusion, RWAs and DePIN, while distinct in their purpose, share common ground in turning the physical world digital. The opportunity to combine these concepts opens the door for innovative applications in finance, infrastructure, and decentralized economies, creating more accessible, efficient, and resilient systems for managing physical assets and infrastructure globally. As blockchain technology continues to evolve, the synergy between RWAs and DePIN could be crucial in shaping the next wave of decentralization.


The post RWA and DePIN: The Future of Assets and Infrastructure appeared first on Tokeny.


Dark Matter Labs

#1 Transitioning from project to product — Day 3


This blog is the first in a series documenting the Re:Permissioning the City (PtC) product development journey. In the spirit of “working out loud”, the series aims to share our ongoing progress, learnings, and reflections on building a digital permissioning system designed to unlock underutilised spaces in the city for civic use, through introducing participatory and distributed forms of spatial governance.

Day 3: Product scoping — balancing strategy and feasibility

On the third and fourth days of the workshop, we started sketching wireframes based on the user journey. This required merging the two scenarios we developed, creating a coherent flow, and listing out both the technical and UI/UX requirements.

Once we laid out the entire journey, we quickly realised that a significant part of what we were building was in fact quite similar to a typical booking platform. The system had two interdependent parts: 1) a booking system that allows a user to list spaces and book events, and 2) a new permissioning system that introduces alternative ways to approve bookings and allows users to be part of permissioning groups and create rules.
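For readers who think in data models, a rough sketch of this split might look like the following. The types and field names are our own invention for illustration, not the actual PtC schema.

```python
# Rough sketch of the two interdependent parts as data types (field names are
# our own, for illustration): a conventional booking layer plus the
# permissioning layer that decides how bookings get approved.
from dataclasses import dataclass, field

@dataclass
class Space:                      # booking system: things that can be listed
    name: str
    steward_group: "PermissioningGroup"

@dataclass
class Booking:                    # booking system: a request to use a space
    space: Space
    organiser: str
    activity: str
    status: str = "pending"       # pending -> approved / rejected

@dataclass
class PermissioningGroup:         # permissioning system: who decides, and how
    members: list[str] = field(default_factory=list)
    rule_templates: list[str] = field(default_factory=list)

    def approve(self, booking: Booking) -> None:
        # Real logic would route through template matching or group consensus.
        booking.status = "approved"
```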

Due to restrictions of time and capacity (4 months of development time), we had to prioritise, which meant we had to decide which part of the system we were going to build.

After some debate, we came up with three potential strategies, from which we had to choose one.

Maximise Experimentation
In this approach, we aimed to minimise the necessary development efforts, particularly for features already common in the market. By doing so, we could redirect our development capacity towards creating interactive prototypes that facilitate permissioning experiments. This included exploring scenarios such as “How would a liability group come together?” and “How would this permissioning group share decision-making responsibilities?” (Focusing on permissioning system)

Risks:

There is a risk that funders may not support this approach, as the booking system may not be fully functional.
The development timeline could be delayed because the prototype is not fully specified yet, meaning the development team would need to wait until the prototype design is finalised.

Opportunities:

Focusing on experimentation benefits future pathway building, especially for innovation funding.

Maximise Potential for Real Users
This strategy recognises the significant impact of involving real users at the end of the process. We proposed developing a functional booking system while continuing to explore the formation of liability groups and the permissioning mechanism through design workshops. (Focusing on booking system)

Risks:

Misalignment with the broader Dm identity and portfolio: investing too much time in developing features that are already available in the market may not align with Dm’s vision and strategic goals.
Deadline pressure: even if we allocate all development capacity to building a fully functioning system, we may still struggle to meet the 4-month deadline.
Project objectives: we want to validate our concept around permissioning through this first phase, and we cannot do that by building a booking system.
Funding risks: it depends on what kind of funding we are going for, but innovation funders will want to see the innovation.

Opportunities:

Foundation for future experimentation: developing functional software provides Dm with solid tools and a platform for future experimentation.
High-quality delivery: this approach ensures that we deliver a fully functional system, likely to perform well in assessments.
User ready: if we can have real users, we can apply for other types of funding, e.g. specifically for product development.

Interoperable Permission Engine
In this direction, we focused on the importance of a fully functional user flow while dedicating our development efforts to creating versatile digital tools for experimentation. This includes developing an interoperable permissioning plugin compatible with existing booking systems. (Focusing on permissioning system)

Risks:

Can end up developing all the full-stack systems for the MVP.
It’s uncharted territory, so we might underestimate the development time needed.

Opportunities:

Can focus on building more value-aligned outcomes.
Can acquire a broader range of potential users than running a single platform ourselves.
Can provide more dynamic types of usage by having some flexibility on the scope of entities who host the permission engine.

We ended up choosing the third option, which was suggested by our backend developer Donghun. We documented this lengthy debate and decision-making process because it triggered a lot of critical, fundamental questions and areas for clarification.

What does product development in Dark Matter Labs look like?

Triggered by the debate around feasibility and vision, we had a chance to reflect on the tensions caused by different priorities. As a collective of individuals primarily trained in architecture, design, and policy, Dark Matter Labs as an organisation doesn’t resemble a typical tech start-up. So what does a product development journey look like in our unique context? How is the product we are striving to develop different, and how should the development journey be adapted to work in our current team dynamics, without compromising delivery? We don’t assume we can answer these questions right now, but we document here some of the reflections that emerged in our conversations during the workshop.

How is our product different?

We all agree that Dark Matter Labs is not a tech start-up trying to make a product that responds to market demands. We are more of a strategic design and research lab interested in elucidating systemic problems and developing experimental products that can provoke, and perhaps solve, some of these fundamental issues. In recent years, we’ve moved beyond crafting narratives that provoke thought, to actually building products that do both — provoke and solve problems. Circulaw is a good example of such a product built with actual users in mind. Having developers on the team who were involved in building products like Circulaw (and other market-ready solutions) gave us the opportunity to raise critical questions.

Product development at Dm presents unique challenges and opportunities, particularly when addressing systemic issues rather than simply filling market gaps or meeting unmet needs. Can we really build a product that addresses the problem of ownership and centralised governance? How far can we go in embedding our critical (but speculative) ideas into a product? Will people even understand and appreciate it? (Even our blogs are notoriously difficult to read). Who is our primary audience or user? Building a product that requires significant upfront resources and diverse capabilities compels us to answer these questions from the outset.

Are we coding too soon?

During the workshop, we had an opportunity to reflect critically on our current approach to transitioning projects into products, particularly how this process affects developers within Dark Matter Labs. One key takeaway was the importance of having a robust paper prototyping phase to validate key concepts and hypotheses before coding begins (tensions could emerge when project holders underestimate the labour of coding — and the labour of having to re-write it). This phase, alongside thorough user interviews and testing, would help refine smaller details early on. From a developer’s perspective, it’s much easier to focus on how to build something if the what has already been clearly defined. As Donghun pointed out, getting these what questions sorted beforehand allows developers to focus on building a product with technical integrity, without worrying about shifting goals.

There are definitely advantages to loosely structured projects within Dm which have been our default pattern — the ability to adapt to changing contexts, being open to radical iteration — but product development requires a different level of investment and nature of collaboration, which in turn demands new structures and practices. Perhaps it’s useful to clearly distinguish the paper-prototype phase supported by workshops, before attempting to start building digital prototypes.

We also realised there was room for improving how strategic designers and developers work together. How can we ensure smoother handovers from concept to execution? Developers thrive when they work on projects with real-world applications — projects that go beyond one-off workshop tools and are sustained long enough to generate meaningful data for future iterations. This sense of continuity and contribution is crucial for developer growth. Ideally, we envision a scenario where designers and developers co-create provocative projects that go live to meet real user needs, operating for a sufficient time to gather the data necessary for iteration and future improvements. This way, developers get a sense of growth and contribution, knowing their work has a lasting impact.

Wrapping up the workshop and looking ahead

This concludes the documentation of our first in-person workshop focused on product scoping. It wasn’t a very structured workshop at the beginning, but we managed to build the necessary structures and processes that allowed us to move to the next stage.

Defining the horizons
Deliberating on core principles and values of the product (more suggestions collected throughout the week)
Designing two types of scenarios and user journeys
Merging two scenarios into one user journey and sketching paper wireframes
Prioritising what to develop/code
Discussing pathway strategies
Ideating around branding/identity
Identifying questions for the future (collected throughout the workshop)

These were some of the concrete steps we took, with countless conversations in between. As we move on to the next phase of production, we hope that this documentation will serve as a template for teams that are looking to explore (digital) products — bridging strategic design and product development, and making the move towards transitioning projects into products.

Lastly, we share some questions that we identified throughout the workshop, which we have ‘parked’ for now.

Do we need everything to be decentralised? How far does decentralisation go?
What kind of deliberation and decision-making model would the permissioning group adopt? E.g. consensus-based, and what is the reasoning?
How do we help space stewards (permissioning group) shape the rules of the space? What kind of facilitation is needed?
Will financial values be generated by spaces? How do we deal with financial value without encouraging rent-seeking behaviours?
How could Horizon 1 look different from the current system (while still operating within existing systems)?
How to convince cities of new ways of doing things?
What is “functionality” for research grant funders? And how do we best meet their expectations regarding tech products? (especially funders who are not typical product development funders)

Read Day 1

Read Day 2

This blog was written by Eunsoo Lee in conversation with the core team of Permissioning the City and utilising the records of the workshop.

Team members who contributed to the workshop (in alphabetical order):

Calvin Po, Donghun Ohn, Eunji Kang, Eunsoo Lee, Fang-Jui ‘Fang-Raye’ Chang, Hyojeong Lee, Shuyang Lin, Theo Campbell

#1 Transitioning from project to product — Day 3 was originally published in Permissioning the City Product Journey on Medium, where people are continuing the conversation by highlighting and responding to this story.


#1 Transitioning from project to product — Day 2


This blog is the second in a series documenting the Re:Permissioning the City (PtC) product development journey. In the spirit of “working out loud”, the series aims to share our ongoing progress, learnings, and reflections on building a digital permissioning system designed to unlock underutilised spaces in the city for civic use, through introducing participatory and distributed forms of spatial governance.

Day 2: Scenario and user journey building

On days two and three, we focused on developing the user journeys. The emphasis was placed on creating tangible, realistic use case scenarios which would help us identify the gaps in our concept and challenge where we might be relying too much on theory and assumptions.

We created a template that divides the system into front stage (frontend) — covering user actions and visible interfaces — and back stage (backend), which handles the behind-the-scenes logic and processes supporting these interactions. We also listed some choices for scenario building, such as types of permissions, users, and spaces.

Feel free to adapt our template

We decided to prioritise the event organiser and space stewards (space owners, managers, and broader stakeholders like neighbours) and split up into two groups: one focusing on a scenario around a music event, which deals with pre-defined/automated permissions and an exception-approval case, and the other on a food-related event that deals with bespoke permissions. We chose music and food specifically as these scenarios are likely to introduce tensions or conflicts. Noise-level issues would allow us to explore how we might use sensors to verify and give real-time feedback to space users, preventing the escalation of conflict, and food/cooking would allow us to dig deeper into the liability mechanisms around fire risks, safety and hygiene.

Group 1 (left) on food/cooking and Group 2 (right) on music

Through the user journey exercise, we were able to clearly distinguish the differences between the three types of permission processes: pre-defined/automated, exception-based, and bespoke permissions.

Pre-defined/automatic
When requesting permissions to use a space, users will be able to choose from an existing template of rules. Template A might be suitable for loud music events with more than 50 people, template B might be suitable for small cooking classes, and template C for book clubs. These templates of rules (or rulebooks) can be initially drafted based on general liability considerations (number of people, types of activity, etc), and further adapted and modified through usage in a particular space. The idea is based on a precedent-based model, similar to case law: if permission has been granted previously for a kind of event without issues, similar future events will also be automatically permitted. For most events that can be classified under certain types of activities, these pre-approved templates will enable instant permissions, simplifying and speeding up the booking process.

Exception-based
Exceptions are cases where users request small deviations which need human approval. In one of our scenarios, Pim, who was hosting a religious worship event involving a music performance, wanted to request an increase to the maximum noise levels allowed. This involves modifying a single clause in one of the templates of rules. The request is processed by a ‘permissioning group’: a group of people who have opted in to act as stewards of a particular space, with the responsibility to partake in decision-making as well as maintaining the space. The members of the permissioning group make a consensus-based decision on whether to approve or reject this exception. They are also made aware that the adapted permission they grant will become a template for future events.

Bespoke permissions
Bespoke permissions are reserved for rare cases where users request permission for a completely new type of activity that does not fit any of the pre-approved templates. In our scenario, this was a proposal for a large local produce market held in a park. A request for bespoke permissions triggers a slightly more complex set of actions than exception-based permissions. The event organiser is prompted to construct a new template of rules based on the proposed activity. They also fill out a self risk assessment form indicating their concerns and what they are excited about (the pros and cons). Submitting this request triggers a notification to the permissioning group, who are given a due date to arrive at a consensus to approve or reject the new template. Once accepted, the event is permitted, and future events with the same characteristics can use the template for automatic permission.
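Put together, the three flows amount to a routing decision. The sketch below shows one way this logic could look; the matching rules and thresholds are simplified placeholders of our own, not the product's actual logic.

```python
# Sketch of how a request might be routed to one of the three permission flows
# described above (automatic, exception-based, bespoke). Thresholds are
# illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Template:
    name: str
    activity_type: str
    max_people: int
    max_noise_db: int

def route_request(activity_type: str, people: int, noise_db: int,
                  templates: list[Template]) -> str:
    same_type = [t for t in templates if t.activity_type == activity_type]
    if not same_type:
        return "bespoke: organiser drafts a new template for group review"
    for t in same_type:
        if people <= t.max_people and noise_db <= t.max_noise_db:
            return f"auto-approved under template '{t.name}'"
    # A template exists for this activity but one clause is exceeded:
    # the permissioning group decides, and an approval becomes precedent.
    return "exception: permissioning group reviews the modified clause"

templates = [Template("A: loud music", "music", 50, 95),
             Template("B: cooking class", "cooking", 15, 70)]
print(route_request("music", 40, 100, templates))   # exception (noise clause)
print(route_request("market", 200, 60, templates))  # bespoke (new activity)
```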

Principles and values

Through the user journey exercise which compelled us to sketch out the details of each action and process, we were also able to define the basic principles of the platform which reflect the underlying logic and values of our concept. They will be used to further define key concepts like the permissioning group, template of rules, feedback systems, incentives and liability mechanisms, and so on. These principles and values can be considered as version 1, which will be iterated later when we have more experience to draw upon.

Governance

Power and liability as inextricably linked: If you want to make decisions you need to share liability i.e. skin in the game

Prioritise proximity to space and physical presence (linked to shared risk and liability)
Giving away power is giving away liability (which is why space owners might want to share decision-making/permissioning power)

Permissioning based on precedents (like case law)

Every space starts with a basic template of rules, which is iterated thereafter
Everything is allowed (within legal limits) until something happens to change the rules
Templates need to be updated regularly (time-limited templates)

Permissions are peer reviewed (e.g. permissioning group)

Permissioning group performs the role of space stewards — responsible for maintaining permission templates and approving bespoke permissions
Anyone can join a permissioning group
Initial permissioning group can be formed through a combination of invitations (based on shared liability holders) and self opt-in through shared interests
Deliberations within the permissioning group prioritise consensus building — through dialogic processes (rather than majority rule)
Permissioning group participants are given a choice to opt out of a particular decision

Incentives

Prioritise system-level risk and benefit sharing, to avoid rent-seeking behaviours
Prioritise generating system-level incentives/benefits (rather than personal/individual)

Feedback

Based on incentives and positive feedback at the system level, rather than penalties and punishment at the individual level
Encourage feedback on rules/permissions, not people and their conduct

Technology

We adopt technology not to maximise efficiency and profit, but to enable greater flexibility and freedoms. We acknowledge that technology could be exclusionary, and while we may not be able to address this immediately, we are committed to designing systems that prioritise inclusivity and accessibility. By embracing open standards and decentralisation, we aim to create tools that empower communities rather than control them.

Read Day 1

Read Day 3

#1 Transitioning from project to product — Day 2 was originally published in Permissioning the City Product Journey on Medium, where people are continuing the conversation by highlighting and responding to this story.


#1 Transitioning from project to product — Day 1


This blog is the first in a series documenting the Re:Permissioning the City (PtC) product development journey. In the spirit of “working out loud”, the series aims to share our ongoing progress, learnings, and reflections on building a digital permissioning system designed to unlock underutilised spaces in the city for civic use, through introducing participatory and distributed forms of spatial governance.

In June 2024, we received good news from one of the many applications we had submitted to develop the Re:Permissioning the City platform. This specific grant, awarded by the National IT Industry Promotion Agency (NIPA) of South Korea, allowed us to spend the next 5 months developing the first digital prototype. Having spent the last 3 years developing the concept through small research grants, we were overjoyed to finally have the opportunity to start building something tangible.

Once we assembled the team, composed of three developers, a graphic designer, and four strategic designers, we gathered in London for a week-long workshop. Looking back, it was an ambitious, high-stakes plan that required turning theory into a concrete product design in a matter of 5 days. We were betting on our combined ‘collective intelligence’ to figure out this challenge together.

Day 1: Defining the problem space and scope of our intervention

Like any ‘design & innovation’ project, we started by collectively defining and narrowing down our area of intervention. We did this through discussing the problem space, our objectives and value proposition, and through defining the various ‘horizons’ of the product we were setting out to build.

Problem space

Fairness in allocation of spaces: in the case of Daegu and other public/government-owned spaces, the current process for allocating shared spaces is seen as unfair. For example, a simple first-come-first-served approach often fails to prevent hoarding of use rights (whoever has more resources to submit applications has a higher chance of gaining rights). As seen in the case of the public square in front of Seoul City Hall, where right-wing Christian groups deliberately submit applications ahead of LGBTQI+ organisations to prevent them from hosting the queer festival, existing rules can be abused to discriminate against certain groups, which challenges the fairness and ethics of existing governance models.

Fairness in decision-making: existing governance around spaces is centralised and opaque, either controlled directly by space owners or with rules set by intermediary organisations entrusted to manage spaces. Ordinary ‘users’ of spaces and other stakeholders (neighbours and others who have a stake) are almost always excluded from the rule-making and permissioning process.

Public value captured as private wealth: we challenge rent-seeking, private ownership models, where 1) public spaces are used to generate private wealth or 2) value generated by the public, e.g. rehabilitation through community activities, gets captured solely by land/space owners. The focus is on ensuring that public spaces are used in a way that benefits the community rather than being a source of income for private entities.

Decision-making based on individual interests: we advocate for a decentralised, commons-based approach to decision-making. The use of spaces in the city is rarely a concern for the property owner alone. Rather, how spaces are used will affect third parties in positive and negative ways, as well as the health of the city as a collective whole. This means decisions on how spaces are used should be made collectively, considering the public or commons’ good rather than individual or organisational interests. The idea is to create a system where the use of space benefits the broader community.

Underutilisation of spaces: the current approach to managing public spaces is bureaucratic, which creates barriers to access and results in underutilisation. Even when spaces are managed by single entities (often NGOs and civil society organisations with a specific mandate), it takes a lot of resources to maximise utilisation, costs they often cannot afford.

Barriers to access: it’s not easy for the average citizen to find spaces to do stuff. Often, spaces are hard to find (no central database), and then there are restrictions on types of use which can be difficult to navigate.

Rules are restrictive: existing rules around spaces (what you can and cannot do) tend to be overly conservative, geared towards preventing potential conflicts. When people want to ask for bespoke permissions (if their activity does not fit into existing types of use), existing booking systems lack processes to easily handle these requests, instead reverting to ad-hoc, off-platform negotiation (or outright rejection). We need different kinds of rules and methods of negotiation that can ‘liberate’ spatial use, to accommodate more flexible and creative uses of space.

Hypothesis

Our hypothesis is that creating a system that enables easier (and democratic) access to public space for communities will remove barriers for people wanting to organise activities that generate social/cultural capital and public value. This will result in increased civic activity in a city (especially key for cities experiencing demographic/economic/social decline), which has broader societal benefits (reduced isolation, better mental health, less division).

How is what we are building different from conventional booking systems? Why is this way of doing things better?

Democratic: it opens up decision-making/rule-making around shared spaces to a wider range of stakeholders, and by encouraging a peer review/approval process, contributes to building democratic capacities.

Legitimacy and consent: a peer-reviewed permission process allows us to gather people’s consent for activities that might not have been possible before. The net effect is that more events can happen in the city (with legitimacy) because we have a more effective way of revealing and implementing the views of the population.

Mission-driven: allowing space owners and citizens to make social impact more easily, rather than just maximising profit from rent-seeking activities.

Power distribution and liability sharing: liability and power are interlinked, which means if you have skin in the game, you get to participate in decision-making. The idea is to transition away from ‘externalities’, where the negative impacts of an individual’s decisions can be displaced onto the commons.

Open source: we are building an interoperable open-source tool that people can fork and integrate into their existing systems.

Distribution of value: financial value derived from a space (e.g. an increase in property prices) is often hoarded by land/space owners. We will try to measure the non-financial value generated by civic activities, as well as distribute financial value across more stakeholders.

Horizons scoping

Typically, product teams will create a product roadmap. However, we decided to take a different approach, coming from a strategic design perspective. The key difference between a product roadmap and horizons scoping is that the former is execution focused, while the latter is focused on identifying and assessing different “horizons” or stages of future opportunities, challenges, and strategic goals over a longer period of time. In practice, we adapted elements of both — focusing on describing the hypotheses we wanted to test, while leaving room for uncertainty and more radical imaginations in Horizon 3 as an intended direction of travel.

Horizon 0 reflects the status quo, Horizon 1 is the scope which is narrowed down considerably to fit the timeline and expectations of the 2024 prototype grant. Horizon 2 reflects what we aim to build as the first full product released to the public, and finally Horizon 3 is a description of where our ambitions lie in the future. What we managed to map out during the workshop is in no way complete — in fact the process of mapping alerted us to critical gaps, such as the question of business models and incentive mechanisms, all of which will need to be defined further. But we share this as a snapshot of our thinking at stage 1 of the development journey.

Read Day 2

#1 Transitioning from project to product — Day 1 was originally published in Permissioning the City Product Journey on Medium, where people are continuing the conversation by highlighting and responding to this story.


HYPR

HYPR + Microsoft Entra ID External Authentication Methods



Last week, Microsoft announced the public preview of external authentication methods (EAM) for Entra ID. As a close partner, HYPR has worked extensively with Microsoft on the new offering and we are excited to be one of the first external authentication method integrations. This means organizations can now choose HYPR phishing-resistant authentication for their Entra ID MFA method, use it in Entra ID Conditional Access policies, Privileged Identity Management, and more.

“Our goal at Microsoft Security is to empower our customers with cutting-edge security solutions. The integration of Entra ID external authentication methods with HYPR reflects this mission, providing our customers with the flexibility to employ their preferred MFA methods, including phishing-resistant MFA, to defend their environments against evolving threats.”

– Natee Pretikul, Principal Product Management Lead, Microsoft Security

What Are Entra ID External Authentication Methods?

The external authentication methods feature was developed to replace the current Entra ID custom controls capability. The EAM solution uses industry standards, supports an open model, and provides far greater functionality than custom controls. With EAM, organizations can use their preferred authentication provider to satisfy MFA policy requirements, managing it the same way as Microsoft-native authenticators.

Key Benefits of the HYPR and Microsoft External Authentication Methods Integration

The new integration benefits both HYPR and Microsoft customers on multiple levels.

How the HYPR Entra ID external authentication method integration works

Greater Flexibility and Choice For Your Entra ID Environments 

With the HYPR–EAM integration, organizations can seamlessly use HYPR as an Entra ID authentication method to meet multi-factor authentication requirements, without the need for federation. Unlike in federation configurations, the user identity is established and managed in Microsoft Entra ID. Essentially, HYPR’s leading phishing-resistant MFA becomes a native-like authentication option in the Entra ID ecosystem, and can be invoked to satisfy MFA requirements for Conditional Access policies, Privileged Identity Management (PIM) and Identity Protection sign-in risk policies.

Consolidate and Unify Authentication Processes

Many enterprises have complex IT environments with multiple identity providers and sign-in processes. These systems operate in silos, creating security blind spots, inefficiencies, and inconsistent user experiences. By choosing a platform-agnostic solution like HYPR, organizations can use the same secure, phishing-resistant authentication across IAM systems and workflows. HYPR already provides tight integrations with Microsoft Entra ID; the new EAM feature expands that connection. It empowers organizations to further consolidate their identity security and create consistent, unified MFA experiences for their users across all Microsoft and non-Microsoft environments.

⭐ Learn more about the HYPR | Microsoft Entra ID integration.
Improve Visibility and Control

The Microsoft external authentication method integration puts some additional powerful tools into the hands of HYPR customers. Administrators and security teams can view all HYPR authentication events in the Entra ID admin center when HYPR is used as an EAM method.

Teams also can define highly granular Conditional Access controls, based on the type of authentication factor a user applies as they authenticate with HYPR. For example, access policies can vary depending on whether someone uses a fingerprint, facial recognition or PIN, to add even stronger levels of security assurance for specific use cases or resources.

HYPR as a Microsoft Entra ID External Authentication Method

Microsoft Entra ID EAM is now in public preview. Read Microsoft’s technical documentation for more details about how this feature works. Current HYPR customers looking to join the public preview should contact their customer success representative. If your organization does not yet use HYPR, but you are interested in using it as an external authentication method, talk to our team!


Thales Group

KNDS selects Thales Power Systems Solution for the Leopard 2 A8

KNDS awarded Thales a contract to deliver compact, programmable and scalable High-Power Solid-State Power Distribution Boards (SSPDB) for the Leopard 2 A8 platform. The SSPDB developed by Thales is designed to provide overcurrent and short-circuit protection, and to enable smart electrical power management of protected vehicles. A first, short-term delivery of SSPDBs will take place in Q3 2024, followed by several hundred units by 2027.

In just a few months, KNDS and Thales engineering teams have jointly succeeded in adapting an SSPDB solution that meets the power management needs of the Leopard 2 A8. Thales will focus on making the High-Power SSPDB a successful product by concentrating on key performance areas of the SWaP (Size, Weight and Power) to meet the stringent and demanding power requirements.

Rated up to 160A per channel and with integrated current, temperature and voltage sensing, the multi-channel SSPDBs are designed to protect against overcurrent and short circuits and offer the flexibility to use pre-programmed operating profiles or real-time selections to enable intelligent power management in a variety of mission scenarios.
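As a generic illustration of what per-channel protection logic of this kind does (this is not Thales' implementation, and the thresholds below are invented), a channel opens immediately on a short circuit and opens on overcurrent that persists beyond its operating profile:

```python
# Generic illustration of solid-state overcurrent protection. Thresholds are
# made up for the example; real boards use programmable operating profiles.

def should_trip(current_a: float, overcurrent_ms: float,
                rated_a: float = 160.0,
                short_circuit_a: float = 800.0,
                max_overcurrent_ms: float = 50.0) -> bool:
    if current_a >= short_circuit_a:
        return True                      # short circuit: open immediately
    if current_a > rated_a and overcurrent_ms > max_overcurrent_ms:
        return True                      # sustained overcurrent beyond profile
    return False

print(should_trip(900.0, 0.0))    # True  (short circuit)
print(should_trip(200.0, 80.0))   # True  (sustained overload)
print(should_trip(150.0, 500.0))  # False (within the channel rating)
```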

The first units will be integrated as early as Q3 2024. This time-critical collaboration demonstrates the ingenuity and agility of our two teams.

Under the KNDS contract, Thales will build hundreds of SSPDBs by 2027, using customization, manufacturing and testing processes already in use for the Thales Power Systems product line.

“With an expertise of more than 20 years, Thales is a global leader in the development and manufacture of Power Systems for protected vehicles. We are proud to have been awarded this contract by KNDS and are confident that this strong partnership will continue.” Martin Bernhardsgrütter, Country Director, Thales Switzerland.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.


Thales radios successfully tested by the German Armed Forces to be deployed within the NATO enhanced Forward Presence

The German Armed Forces conducted operational tests with PR4G and SYNAPS-H Thales radios to demonstrate their suitability for the needs of the multinational Battalion Group deployed by NATO. Within one year, Thales successfully delivered radio equipment to the German Armed Forces for the NATO enhanced Forward Presence (eFP). These four-week operational tests demonstrated that Thales radios are interoperable and secure.

Thales radios for use in NATO enhanced Forward Presence were tested in an intensive four-week operational trial under the direction of the Army Development Office. These tests were conducted with the participation of the Army Development Office, the Federal Office of Bundeswehr Equipment, Information Technology and In-Service Support (BAAINBw), the German Army's "Test and Trial" teams and Dutch and French Armed Forces.

The particular focus of the procurement was to provide modern, encrypted, electronic counter-countermeasure (ECCM)-capable command and control radios for the multinational deployment of the enhanced Forward Presence, radios which can transmit voice in parallel with data and their own position.

“During the four-week operational test, Thales PR4G and SYNAPS-H radios met the requirements so effectively that the system is deemed suitable for introduction into the German Armed Forces. We are very pleased that there are no more obstacles to the operational use of the radios in Lithuania, where the deployed forces will have protected, modern radios,” added Christoph Ruffner, CEO and Country Director, Thales Deutschland.

Although the soldiers had not received any training, only a short briefing, it was possible to establish operational readiness in under an hour. The radios also impressed with a stable radio network and in the range tests.

The purpose of NATO enhanced Forward Presence is to strengthen its defensive and deterrent posture on Europe's eastern flank. NATO battlegroups are deployed to the Baltic states of Estonia, Latvia and Lithuania as well as to Poland and led by the United Kingdom, Canada, Germany and the United States respectively.


Innopay

Mounaim Cortet to share insights on FiDA at Mobey Forum’s Amsterdam member meeting


Mounaim Cortet, Vice-President of INNOPAY, will be speaking at Mobey Forum’s Amsterdam Member Meeting, hosted by ING, on 19-20 November. The event will focus on key themes such as API monetisation, the EU’s Financial Data Access (FiDA) regulation, Embedded Finance and more.

Mounaim will share his insights on the strategic implications of FiDA, the challenges and considerations regarding FiDA schemes, and the strategic responses and opportunities for FIs. He will be joining an impressive lineup of speakers, including:

Katleen Van Gheel, Global Head of Innovation, ING
Hetal Popat, Director of Open Banking, HSBC
Joris Hensen, Founder and Co-Lead, Deutsche Bank API Programme
Vjekoslav Bonic, Head of Digital Channels & AI, Raiffeisen Bank International AG
Gijs ter Horst, COO, Ximedes
Patrick Langeveld, Open Banking Expert, ING

This event is open exclusively to Mobey Forum members, who include industry leaders, fintech professionals and Open Banking experts. If you’re a Mobey Forum member, don’t miss this opportunity to hear from the top voices in the industry. Register now in the Mobey Forum’s Online Member Community to secure your spot.


Ontology

Revolut’s Fraud Dilemma: Why Decentralized Identity Is the Real Answer


In a world that’s rapidly shifting to digital-first everything, banks like Revolut have redefined how we manage money. Instant transfers, real-time currency exchange, seamless app experiences — all with a sleek interface. But for Jack, a business owner who had £165,000 stolen in under an hour, this digital convenience has become a nightmare. And this story highlights one glaring question: Are centralized financial systems like Revolut’s really equipped to protect us in the digital age?

Jack’s story is unsettling. It started with a simple phone call from a scammer posing as Revolut. A few security codes later, his entire business account was drained. But this wasn’t just Jack’s mistake; Revolut’s systems failed him. They didn’t flag 137 payments to three new payees in an hour as suspicious, and by the time Jack reached out, he had lost £67,000 more due to the 23-minute delay in freezing his account. Revolut has refused to refund him, and they’re not alone: 10,000 fraud reports last year flagged Revolut as the culprit, more than any major high-street bank.

But what if this entire scenario could have been avoided — not with better fraud detection, but by rethinking how we handle identity verification and financial transactions altogether? Enter decentralized identity, the future of Web3 security, powered by solutions like ONT ID from Ontology.

The Case for Decentralized Identity

Revolut, like most traditional financial systems, uses centralized identity systems to verify who you are. This means your personal information — passwords, codes, biometric data — is stored and managed by a single company. If that system is compromised, as Jack’s was, you’re left exposed, and recovering your losses becomes a bureaucratic nightmare. That’s exactly what happened in Jack’s case. Fraudsters bypassed facial-recognition software and hijacked his account. The fact that Revolut didn’t even have a stored image of the fraudsters who authorized the theft shows the cracks in the system.

Decentralized identity flips this model on its head. With ONT ID, users don’t need to rely on a single institution to prove their identity. Instead, you are in control of your identity, managing it through a decentralized system that uses blockchain technology to verify your credentials securely. This self-sovereign identity model means your personal data is no longer centralized, reducing the risk of massive data breaches or fraud.

How ONT ID Could Have Prevented This

Imagine if Jack had been using a decentralized identity solution like ONT ID instead of Revolut’s traditional system. Here’s how it could have been different:

No Centralized Control: Jack’s identity wouldn’t have been stored on a vulnerable centralized server, reducing the risk of fraudsters gaining access through phishing attacks or bypassing ID verification software.

Zero-Knowledge Proofs: ONT ID can implement a zero-knowledge-proof style system, which means Jack could have verified his identity without exposing any sensitive personal information. The scammers wouldn’t have had enough data to initiate the theft in the first place.

Real-Time Security Checks: ONT ID could have flagged any unusual activity in real time through its decentralized network, potentially freezing Jack’s account the moment fraud was detected — long before 137 payments were processed.

Decentralized Finance Meets Decentralized Identity

This isn’t just a problem for Revolut; it’s an issue for any centralized institution dealing with financial transactions. Fraudsters are always evolving, looking for ways to exploit these systems. Web3, with its emphasis on decentralization, offers a more secure future. With decentralized finance platforms on the rise, the integration of decentralized identity solutions like ONT ID becomes crucial.

Jack’s story is a cautionary tale of how fragile centralized financial systems can be, especially when they’re more focused on growth than security. But the solution is right in front of us: By embracing decentralized identity, we can build a future where individuals like Jack are in control of their own data and financial security. And that future isn’t some distant vision — it’s here, with tools like ONT ID leading the charge.

Ready to help build a more secure, decentralized future? With Ontology’s DID Fund, you can be part of the solution. We’re supporting innovators and developers who are driving the next generation of decentralized identity, privacy, and security.

Whether you’re passionate about blockchain, self-sovereign identity, or protecting users from fraud, the DID Fund can help you turn your ideas into reality. Apply today and join the movement to transform how we control and protect our personal data in Web3.

Start your journey at ont.id/did_fund.

Revolut’s Fraud Dilemma: Why Decentralized Identity Is the Real Answer was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 14. October 2024

SC Media - Identity and Access

Pentagon shares new cybersecurity rules for government contractors

The DOD introduced new cybersecurity requirements for companies that contract with the federal government.


What the US can learn from the UK and EU about regulating AI

There are ways to protect the public from the potential dangers of AI without stifling innovation – and the Europeans have already shown us how.


Thales Group

Thales Alenia Space signs a contract with OHB to develop two radar instruments for ESA’s 10th exciting new Earth Explorer Harmony mission

Leveraging its longstanding experience in radar-based Earth observation satellites, Thales Alenia Space will lead a wide European industrial consortium. Together with data from the Copernicus Sentinel-1 mission, for which Thales Alenia Space is prime contractor, the two-satellite Harmony constellation will provide a wealth of new information about our oceans, ice, earthquakes and volcanoes – which will make significant contributions to climate research and risk monitoring.

Milan, October 15, 2024 – Thales Alenia Space, a joint venture between Thales (67%) and Leonardo (33%), has signed a contract with OHB to develop the two Earth observation Synthetic Aperture Radar (SAR) instruments to be embarked on the two-satellite Harmony constellation – ESA’s 10th Earth Explorer mission, expected to be launched aboard a Vega-C launch vehicle by 2029.

Harmony ©ESA

Thales Alenia Space will lead a diverse European industrial consortium to design, develop and validate the C-Band SAR instruments and will also be responsible for the C-Band digital electronics and antenna tiles to be embarked on both Harmony satellites.

“This contract confirms Thales Alenia Space’s longstanding and recognized experience in manufacturing Earth observation satellites based on radar technology,” said Giampiero Di Paolo, Senior Vice President Observation, Exploration, and Navigation at Thales Alenia Space. “The development of the two radar instruments will allow Thales Alenia Space to make a significant technological and architectural step forward, improving the competitiveness of SAR products in both the institutional and commercial Earth observation markets.”

Thales Alenia Space has played a key industrial role during the Harmony preparatory phase, supporting ESA in defining a high-performing solution capable of fully meeting the mission’s scientific objectives while developing all the relevant SAR enabling technologies in parallel.

About the 10th Earth Explorer Harmony mission

Earth Explorer missions form the science and research element of ESA’s Earth Observation FutureEO Programme. By returning critical data to understand the planet and predict what lies in store, the Earth Explorers are fundamental to advancing science and, subsequently, to restoring environmental balance for a sustainable future. Each of these extraordinary missions carries innovative space technology, demonstrating how new techniques can return an astonishing wealth of scientific findings about our planet.

Together with Sentinel-1, Harmony promises to provide a wealth of unique data on ocean–ice–atmosphere interactions at unprecedented resolution for more insight into upper-ocean heat exchanges, drivers of extreme weather and the long-term impacts of climate change.

The mission will also shed new light on deformation and flow dynamics at the rapidly changing edges of ice sheets for a better understanding of sea-level rise. In addition, Harmony will measure small shifts in the shape of the land caused by earthquakes and volcanic activity, thereby contributing to risk monitoring.

The Harmony mission consists of two bistatic passive Synthetic Aperture Radar (SAR) receive-only satellites, enhanced by a Thermal Infrared (TIR) optical payload, flying in a loose formation with Sentinel-1. Using Sentinel-1 as an illuminator of opportunity and augmenting its observations with a multi-static configuration for direct measurements of surface velocities will make a highly innovative contribution to Earth Observation capabilities.

ABOUT THALES ALENIA SPACE

Drawing on over 40 years of experience and a unique combination of skills, expertise and cultures, Thales Alenia Space delivers cost-effective solutions for telecommunications, navigation, Earth observation, environmental management, exploration, science and orbital infrastructures. Governments and private industry alike count on Thales Alenia Space to design satellite-based systems that provide anytime, anywhere connections and positioning, monitor our planet, enhance management of its resources and explore our Solar System and beyond. Thales Alenia Space sees space as a new horizon, helping to build a better, more sustainable life on Earth. A joint venture between Thales (67%) and Leonardo (33%), Thales Alenia Space also teams up with Telespazio to form the parent companies’ Space Alliance, which offers a complete range of services. Thales Alenia Space posted consolidated revenues of approximately €2.2 billion in 2023. Thales Alenia Space has around 8,600 employees in 9 countries, with 16 sites in Europe and a plant in the US.

www.thalesaleniaspace.com

THALES ALENIA SPACE – PRESS CONTACTS

Tarik Lahlou
Tel: +33 (0)6 87 95 89 56
tarik.lahlou@thalesaleniaspace.com

Catherine des Arcis
Tel: +33 (0)6 78 64 63 97
catherine.des-arcis@thalesaleniaspace.com

Cinzia Marcanio
Tel: +39 (0)6 415 126 85
cinzia.marcanio@thalesaleniaspace.com


ESPRIT module for Lunar Gateway orbital outpost set for a significant upgrade


Thales Alenia Space and ESA sign contract amendment to extend and optimize ESPRIT module

Milan, October 14, 2024 – Thales Alenia Space, the joint venture between Thales (67%) and Leonardo (33%), has signed an amendment to its contract with the European Space Agency (ESA) to develop the ESPRIT[1] communications and refueling module for the future Lunar Gateway orbital outpost. Worth €164 million, the amendment provides for extending and optimizing the ESPRIT module for which Thales Alenia Space in France is the prime contractor, in collaboration with OHB, alongside Thales Alenia Space in Italy and in the UK.

ESPRIT module on the Gateway ©Thales Alenia Space

The ESPRIT module is composed of two main elements: Lunar Link[2] will ensure communications between the Gateway and the Moon, while Lunar View[3] will supply the station with xenon and chemical propellants to extend its lifetime. Lunar View features a pressurized volume with six large windows, offering a 360° view on the outside of the Gateway and the Moon, and will include a logistics area for storing cargo and supplies intended for the crew.

This amendment to the ESPRIT contract provides for a significant increase in the size of Lunar View, which will now span 4.6 meters and be 6.4 meters long, with a total mass of 10 metric tons (versus 3.4 meters, 3 meters and 6 metric tons initially). This evolution is the result of NASA’s choice to launch Lunar View alongside a crewed Orion vehicle aboard the SLS Block 1B launcher, which offers more lift capacity than the launch vehicle previously planned.

In particular, the extended Lunar View will:

• Provide more storage space (6.5 m3) on-orbit and accommodate up to 1.5 metric tons of cargo at launch, thus reducing resupply flights to the Lunar Gateway;

• Enable installation of two attachment points to accommodate the Canadarm3 mobile robotic arm system, supplied by the Canadian Space Agency (CSA), for operations such as inspecting, maintaining or repairing the Gateway, assisting astronauts during spacewalks, handling science experiments in lunar orbit, or catching spacecraft visiting the Gateway;

• House the avionics suite equipment (computer, etc.) inside the module for easier maintenance and to avoid extravehicular activities if repairs are required.

These upgrades will require all of Lunar View’s subsystems to be adapted, especially the electrical power and avionics subsystems and the software and crew interface equipment.

Lunar Link is scheduled to launch in 2026 with the HALO module, while Lunar View is planned for delivery in 2029 for launch a year later, on the Artemis V mission.

“I would like to thank ESA for supporting our industry and renewing its trust in our company’s expertise,” said Hervé Derrey, CEO of Thales Alenia Space. “Thanks to the perfect complementarity of our competences in Italy and in France, we are proud to be contributing our know-how to the Artemis program and to the Lunar Gateway orbital outpost, which are set to push the boundaries of lunar exploration and pave the way for future crewed deep-space exploration missions, with Mars in sight.”

This contract consolidates Thales Alenia Space’s key role in crewed and robotic exploration of the Moon and deep space. The company is supplying critical systems for the Orion capsule’s European Service Module (ESM) and is currently developing two more pressurized modules for the Lunar Gateway: the Lunar International Habitat module (I-HAB) for ESA and the Habitation and Logistics Outpost (HALO) for Northrop Grumman. Thales Alenia Space has also signed a major contract with the Italian space agency ASI to launch the project to build the very first lunar Multi-Purpose Habitat (MPH).

Industrial contributions to the ESPRIT module

Thales Alenia Space in France is the program prime contractor. Thales Alenia Space in Italy is supplying the pressurized tunnel and windows and Thales Alenia Space in the UK is contributing to the chemical propellant refueling system, while OHB – as a main team member – is in charge of the mechanical and thermal subsystems for the non-pressurized parts of the module and the xenon refueling system. Thales Alenia Space in Belgium was selected after competitive bidding to supply the Remote Interface & Distribution Unit for Lunar Link and the Traveling Wave Tube Amplifiers. Thales Alenia Space in Spain will develop the S-band communication transponder and Thales Alenia Space in Italy the K-band transponder.

A cislunar orbital station

The Lunar Gateway orbital outpost is one of the pillars of NASA’s Artemis program to establish a sustained human presence on the Moon as a staging post for future interplanetary exploration missions. This program is an international collaboration between NASA (United States), ESA (Europe), JAXA (Japan) and CSA (Canada). The 40-metric-ton station will be assembled in space and placed in an elliptical lunar orbit. It will be equipped with a robotic arm and docking ports, and made up of habitation modules to accommodate long-duration crewed missions and provide electrical power, propulsion, logistics and communications. While not designed to be manned permanently, it will be able to support up to four astronauts for one to three months. Acquiring new experience on and around the Moon will prepare NASA to send the first humans to Mars in the years ahead, and the Lunar Gateway is set to play a vital role in this endeavor.

[1] European System Providing Refueling, Infrastructure and Telecommunications

[2] Previously HLCS (HALO Lunar Communication System)

[3] Previously ERM (ESPRIT Refueling Module)


HYPR

Top 15 Cybersecurity Regulations for Financial Services in 2024

Financial services are one of the most targeted industries in the world for cyberattacks, suffering nearly 20% of all attacks in 2023. This is understandable considering the high-value outcomes of successful attacks and the fact that, despite supposed security improvements, attacks are still relatively successful, with 84% of finance organizations hit by a cyberattack going on to experience at least one breach.

Data breaches don't just affect the institution that's compromised but also affect confidence in the sector as a whole. The International Monetary Fund has highlighted the significant threat that weak financial services cybersecurity poses to the industry and the world. Potential outcomes range from a loss of confidence in financial services to widespread economic instability.

That's why global cybersecurity regulations have been ramped up over recent years, as they strengthen the security posture of individual firms and the industry overall. Here we'll look at the most important financial services cybersecurity regulations for 2024 and beyond.

New York — NYDFS Part 500

One of the US’s most important pieces of cybersecurity regulation is the New York Department of Financial Services cybersecurity regulation, technically known as 23 NYCRR Part 500. Enacted in 2017, it affects any firm that operates under New York’s banking, insurance or financial services laws, which covers most financial services firms in the US.

It requires firms to implement a cybersecurity policy over data governance, access controls and consumer privacy. It also obligates the introduction of more robust security methods, such as the deployment of multi-factor authentication for protecting non-public information, according to the NYDFS MFA requirements.

In November 2023, the NYDFS adopted amendments requiring firms to:

• implement access and privilege management
• institute quarterly reporting to the board by the CISO
• increase the scope of incident reporting to include cybersecurity events such as ransomware
• administer annual risk assessments
• conduct annual cybersecurity awareness training that focuses on ransomware and social engineering
• conduct vulnerability management that includes annual penetration testing

In addition, the new amendment mandates that firms implement multi-factor authentication (MFA) for remote access and privileged accounts by November 2024. 

Upcoming Compliance Requirements 

By May 1, 2025, financial institutions must review access privileges for all users with access to sensitive information. This includes automated scans of information systems to identify vulnerabilities and manual review of systems that are not covered by automated scans. 

By November 1, 2025, organizations must develop and maintain a comprehensive asset inventory of their information systems that includes key information tracking (e.g., owner, location, etc.), policies for updating the asset inventory, and the procedure for disposing of information.

Pro tip: Consider implementing passwordless, phishing-resistant MFA, based on FIDO standards, to ensure that only cryptographically verified identities can access sensitive financial systems and prevent phishing attacks. These technologies can help companies improve compliance with stringent and evolving regulatory requirements such as NYDFS Part 500.

US — Gramm-Leach-Bliley Act (GLBA)

The GLBA has a specific Privacy of Consumer Financial Information Rule that directly affects financial services cybersecurity. It concerns non-public personal information (NPI) that a company collects when providing, or informing consumers about, a financial product or service. Fines for non-compliance can be up to $100,000 per violation, and complicit directors can face five years in prison.

US — Sarbanes-Oxley (SOX)

The original Sarbanes-Oxley Act was instrumental in codifying the disclosures companies must make to current or potential investors, as well as the penalties that are due for breaches (with executives being directly on the line for up to $1 million and ten years in prison). 

It has since been updated to include cybersecurity considerations. It now obligates all publicly traded companies in the US and their wholly-owned subsidiaries to declare adherence to cybersecurity best practices in areas such as authentication and data safety. They are also required to report any data breaches publicly.

Pro tip: Ensure secure employee identity proofing during onboarding by using a combination of background checks, strong authentication that includes secure cryptographic protocols and biometric validation to comply with Know Your Employee (KYE) regulations.

US — FFIEC Standards

The Federal Financial Institutions Examination Council (FFIEC) is an interagency body that sets standards for all federally supervised financial institutions, including their subsidiaries. The FFIEC’s cybersecurity best practices include guidance on effective authentication and access risk management. The FFIEC authentication standards emphasize multi-factor authentication (MFA) as a critical security control against financial loss and data compromise, similar to the PSD2 Strong Customer Authentication mandate.

It includes references to NIST standards SP 1800-17 and SP 800-63B, which provide implementation guidelines for passwordless MFA based on FIDO specifications. In August 2024, the FFIEC announced that it will sunset its Cybersecurity Assessment Tool on August 31, 2025, and asks financial institutions to refer directly to relevant government resources, including the NIST Cybersecurity Framework 2.0 and the Cybersecurity and Infrastructure Security Agency’s (CISA) Cybersecurity Performance Goals.

US — FTC Safeguards Rule

The FTC Safeguards Rule requires non-banking financial institutions, such as mortgage brokers, auto dealers, and payday lenders, to implement a comprehensive security program to keep their customers’ information safe. Several new provisions of the rule went into effect in 2023. Among them is a mandate for multi-factor authentication for anyone accessing customer information. It should be noted that this includes MFA for desktop and server access, not just applications.

US — NIST Cybersecurity Framework 2.0

The NIST Cybersecurity Framework (NIST CSF) was originally designed as a guide for businesses of all industries and sizes to manage cybersecurity risk. The newest version, CSF 2.0, addresses the evolution of technology toward cloud migration and SaaS by adding a governance function and a set of searchable resources that security leaders can use to make the best decisions regarding their cybersecurity.

This framework is particularly relevant for financial organizations, which rely heavily on SaaS and cloud solutions and must protect vast amounts of sensitive data from data breaches, cyberattacks and operational failures.

Pro tip: Implement continuous authentication to validate user identity in real-time, ensuring security throughout the entire session. This type of adaptive authentication defends against risks related to stolen credentials and unauthorized access. 

US — Executive Order on Critical Infrastructure Cybersecurity

Enacted in 2013, Executive Order 13636 on critical infrastructure cybersecurity requires federal agencies to work together with the private sector to strengthen security in critical sectors such as water, electricity and healthcare. During the global coronavirus pandemic, the financial services sector was officially classified as a critical sector, as it was considered essential to maintaining the nation’s economic stability.

Organizations are encouraged to use the NIST CSF framework to align their cybersecurity risk with a strategic plan of defense. This includes information sharing, developing incident response and recovery plans, and strengthening cybersecurity resilience through measures such as MFA and threat detection. 

The mandates for 2024 and 2025 include requiring each sector to have a specific cybersecurity plan tailored to its risk, along with improved intelligence and threat sharing. In addition, it tasks different federal agencies with responsibility for different critical infrastructure (e.g., the Department of Energy is responsible for the security of the U.S. energy sector). It also requires the federal government to adopt minimum security requirements and a risk-based approach to critical infrastructure.

California — California Consumer Privacy Act (CCPA)

Introduced to help protect the privacy rights and consumer protections of Californians, the CCPA affects any company that does business with Californians and meets one of the following criteria:

• Has a gross revenue of over $25 million
• Buys, sells or receives personal data on 50,000 consumers
• Makes over half its revenue from selling consumers’ personal information

The fines can be up to $2,500 for unintentional violations and $7,500 for intentional violations, which will be multiplied per record stolen in the case of a data breach.

EU — Payment Services Directive 2 (PSD2)

PSD2 was introduced to make it easier for financial services companies to integrate and securely share data while making payment systems safer. In addition, the law set specific technical standards for strong customer authentication and improved security measures.

The measures affect all companies catering to consumers in the EU and any payments that start, travel through or end in the EU. This puts clear obligations on financial services cybersecurity, even for firms outside the EU.

An updated version of the framework, PSD3, is currently in review. PSD3 will introduce significant changes for banks and non-bank payment service providers (PSPs), as well as consumers. The changes include new Strong Customer Authentication (SCA) regulations, with stricter rules around data access, payment protection, and authentication of users. The final version is expected to be published in late 2024 and to be enforceable in 2026.

EU — NIS2 Directive

NIS2, or the Network and Information Security Directive 2, is an updated regulation from the European Union designed to strengthen cybersecurity across multiple industries. Member states must transpose it into national law by October 17, 2024. NIS2 expands on the original NIS Directive by widening its scope and imposing stricter rules on security practices and incident reporting, with stiffer penalties for non-compliance.

Under NIS2, entities in sectors like energy, finance, transport, healthcare and manufacturing must implement strong cybersecurity protocols. These include effective risk management, strong authentication and access protocols, real-time threat monitoring, and rigorous incident reporting standards.

Importantly, the directive specifies the use of multi-factor authentication (MFA) and continuous authentication to protect network and information systems. NIS2 impacts not only major financial institutions, but also smaller financial entities, payment services, and digital wallets.

HYPR saves customers millions of dollars, with a 324% ROI. Read the Forrester report.

EU — Digital Operational Resilience Act (DORA) 

In response to increasing numbers of cyberattacks and operational disruptions since the financial crisis of 2008, the Digital Operational Resilience Act (DORA) is targeted at increasing the resilience of the financial sector for businesses in the European Union and those dealing with EU-based customers.

It includes authentication and access control requirements for Information and Communication Technology (ICT) systems, on which the financial industry increasingly relies for the outsourcing of services that deal with sensitive data. DORA aims to help defend this sensitive data against unauthorized access by malicious actors, which could otherwise lead to data breaches, security incidents, and operational disruptions.

EU — General Data Protection Regulation (GDPR)

All companies processing the data of European Union citizens are affected by the GDPR. The law determines how data is used and protected and governs how consent must be obtained for collecting it. Timely reporting of breaches affecting EU citizens is also required.

For financial services cybersecurity, adhering to GDPR is essential. Failure to do so can lead to fines of up to €20 million or 4% of global annual revenue, whichever is higher; Amazon has received the biggest fine so far, at roughly $888 million.

UK — Data Protection Act

After the UK left the EU, it retained the GDPR, passing it into law as the Data Protection Act (2018). It is roughly the same as the EU GDPR (amended for UK citizens) and carries the same requirements around data safety, consent and reporting, and the same fines for non-compliance.

Global - Payment Card Industry Data Security Standard (PCI DSS)

The PCI DSS covers the processors of payments from major credit and debit card companies. To achieve compliance, financial services cybersecurity programs must meet several obligations, such as protecting cardholder data, encrypting data in storage and transmission, and authenticating access to all system components. Breaches of the PCI DSS may result in fines and restrictions in using major credit cards.

The latest version, PCI DSS 4.0, introduces strong authentication requirements specifically related to passwords and MFA. Passwords now have stricter specifications (e.g., resetting them every 90 days), and MFA requirements have extended beyond administrators accessing the cardholder data environment (CDE) to all types of system components, including cloud, hosted systems, on-premises applications, network security devices, workstations, servers and endpoints.

Pro tip: Ensure compliance with standard 8.3.3 by using automated, high-assurance identity verification methods when resetting user credentials / authentication factors. This standard requires user identity verification before modifying authentication to prevent attacks that target this reset process.

Singapore — Monetary Authority of Singapore Notices on Cyber Hygiene

The Monetary Authority of Singapore (MAS) regulates financial institutions in the banking, capital markets, insurance and payments sectors. The MAS has issued a collection of notices on cyber hygiene, which are a set of legally binding requirements that financial institutions must take to mitigate the growing risk of cyberthreats.

The cyber hygiene notices cover six key areas, which include securing administrative account access, regular vulnerability patching and mitigation controls for systems that cannot be patched, written and regularly tested security standards, perimeter defense systems, malware protection and multi-factor authentication for any system used to access critical information.

Other — Various U.S. State Biometric Laws

Multiple U.S. states have biometric privacy laws — such as the Illinois Biometric Information Privacy Act (BIPA) — that affect any company doing business with a resident of that state. These laws regulate collection and storage of biometric information, such as face scans, fingerprints, or voiceprints. The statutes point out that biometric identifiers are different from other types of sensitive information as they are biologically unique to the individual, and cannot be changed once compromised.

Consequences of Non-Compliance with Financial Cybersecurity Regulations

When businesses fail to comply with these financial cybersecurity regulations, they are subject to monetary penalties, increased regulatory scrutiny, and a higher risk of cybersecurity incidents. For example, the fines for NYDFS non-compliance can be $250,000 a day for ongoing non-compliance. These penalties and security incidents due to non-compliance also affect customer trust and the value of the brand. In 2022, Uber’s stock went down by 5% after its third data breach in three months. 

Along with operational disruption and a loss in revenue, cybersecurity incidents may result in legal action months or even years after the incident, as in the case of the class-action suit brought by consumers against CDK over the MOVEit data breach.

Achieve Regulatory Compliance with Identity Assurance

The financial services sector is at high risk of cyberattacks due to the value of successful data breaches or account takeover attacks. To combat this, state, national and supranational governments and industry groups have introduced several financial services cybersecurity regulations to ensure best practice is deployed throughout the industry. 

A common thread throughout much of the financial services cybersecurity regulations worldwide is the protection of data and stronger identity security systems. Financial services organizations globally, including two of the top four banks, rely on HYPR  to secure their systems and achieve regulatory compliance.

HYPR combines FIDO2 passwordless MFA, continuous adaptive risk response and automated identity verification to secure finance organizations while improving user experience. Learn more about HYPR’s security certifications and how our identity assurance platform helps you comply with financial cybersecurity regulations worldwide.

Key Takeaways

• Updates to Cybersecurity Regulations: Regulations are becoming more stringent across various frameworks, requiring frequent audits, vulnerability scans, and comprehensive asset inventories to improve cybersecurity and compliance.
• A Global Focus on Financial Cybersecurity: Regulations like GDPR, PSD2, PCI DSS 4.0, and the new EU DORA focus on data protection, strong authentication and cyber resilience.
• Consequences of Non-Compliance: Non-compliance with financial cybersecurity regulations can result in severe monetary penalties, reputational damage and legal action.

FindBiometrics

iDenfy Extends Reach in Crypto Market with GlolinkOTC Partnership

iDenfy has announced a partnership with GlolinkOTC, a Czech-based fiat-to-crypto exchange platform. The collaboration aims to integrate iDenfy’s suite of identity verification and compliance tools into GlolinkOTC’s platform, providing a […]

Neglected ‘Alpha’ Channel Opens Door to Attacks on Computer Vision Systems: UTSA Researchers

Recent research from the University of Texas at San Antonio (UTSA) has uncovered a significant vulnerability in artificial intelligence image recognition systems, including facial recognition and other computer vision applications. […]

ID.me Share Sales May Point to IPO: Report

ID.me is enabling its employees and early investors to sell shares in a new deal that values the company at approximately $1.8 billion, reports Bloomberg. The valuation marks an increase […]

Datarella

Our Data Authenticity Chain

This is the third article in a series of technical posts about how Track & Trust works at a component level. The world today is full of fake news and dubious “facts.” Consequently, we face a significant challenge in verifying the accuracy of the data we receive. Moreover, a major part of this challenge is identifying the source of this data. We can’t predict who the end users of the Track & Trust system will be or exactly what they will want to communicate, which makes this task even more difficult. To address this issue, we must ensure that data entering our system are valid. This post explores how the “Trust” part of Track & Trust works. It explains exactly how we maintain the chain of data authenticity.

Quick navigation links to the follow-up articles will be provided at the bottom of each article once the series is complete. For now, let’s jump in.

Establishing a foundation for the data authenticity chain

We designed our system to accommodate key requirements that establish a foundation for data authenticity. Specifically, our goal was to create a flexible system that can work with any logistics company, regardless of their internal processes. This flexibility is a key benefit of Track & Trust and allows us to collaborate with a wide range of partners. Furthermore, by using Track & Trust, logistics companies can increase the number of data points they receive about their shipments from the field.

This, in turn, enables them to achieve probabilistic 360° supply chain tracking. Our team structured the Track & Trust data to integrate easily into any logistics database. In particular, we use a series of linked cryptographic signatures and blockchain transactions to create this data authenticity chain. This chain of custody has a specific purpose: it ensures that we can authenticate and validate offline events once they reach our servers.

How does the data authenticity chain work?

TLDR: We leverage APIs to take inputs from our customers (logistics firms) as well as to give them valuable probabilistic 360° supply chain tracking data back. For demonstration purposes, we have built a front-end website to make the system tangible, but the magic happens via our Swagger API.

The processes surrounding our data authenticity chain are pretty technical. To make them easier to understand, we’ve formatted the workflow into a sequence diagram that anyone can understand.

In summary, our data authenticity chain is simply a way of validating, recording and making messy data from the field trustworthy. Once that’s accomplished, we leverage our blockchain toolkit to make those data immutable and highly tamper-resistant. It’s a chain of custody for that data that includes built-in proof of origin. This, in turn, enables traceability and trust beyond the current state of the art.
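To make the idea concrete, here is a minimal sketch in Python of a hash-chained chain of custody, where each field event embeds the hash of the previous record so that any tampering breaks every later link. This illustrates the general technique only, not Datarella’s actual implementation; real Track & Trust records are additionally signed and anchored in blockchain transactions, and all names here are hypothetical.

```python
import hashlib
import json

def record_event(chain, payload):
    """Append an event whose hash covers the payload and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify_chain(chain):
    """Recompute every hash; return False if any record was altered."""
    prev_hash = "0" * 64
    for rec in chain:
        body = {"payload": rec["payload"], "prev_hash": rec["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev_hash or rec["hash"] != digest:
            return False
        prev_hash = rec["hash"]
    return True

chain = []
record_event(chain, {"shipment": "XYZ-1", "event": "departed warehouse"})
record_event(chain, {"shipment": "XYZ-1", "event": "arrived at border"})
print(verify_chain(chain))   # True
chain[0]["payload"]["event"] = "lost"
print(verify_chain(chain))   # False: tampering breaks the chain
```

Anchoring the head of such a chain in a blockchain transaction is what makes the whole history tamper-evident rather than merely internally consistent.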

Our next post will cover all of the ways that we can view this information. We’ll also be covering the orchestration systems operating in the background that enable us to do over-the-air updates to the hardware. There will be dashboards, monitoring and CI/CD galore for your perusal.


The post Our Data Authenticity Chain appeared first on DATARELLA.


KuppingerCole

Guardians Under Pressure: Mental Health in the World of Cybersecurity

by Warwick Ashford

In today’s hyper-connected world, cybersecurity professionals protect organizations from increasingly complex threats. While essential for safeguarding data and digital infrastructures, this work often takes a mental toll. Pressures arise from regulatory demands, business expectations, law enforcement interactions, cybercriminals, and IT complexity.

Regulatory Pressures and Compliance

Compliance with regulations like GDPR, HIPAA, and PCI DSS requires constant monitoring and attention to detail. The consequences of non-compliance heighten anxiety for professionals responsible for ensuring strict adherence.

Business Demands and Pace of Work

Cybersecurity teams face constant pressure as businesses drive digital transformation. Balancing business goals with preventing vulnerabilities leads to exhaustion. The demand to "do more with less" and justify security investments adds stress, especially when prevention's value is hard to quantify.

Law Enforcement and Criminal Activity

Collaborating with law enforcement and combating cybercriminals, including organized crime and state actors, brings additional stress. Investigating breaches and countering these threats can take a psychological toll.

Technological Complexity and Uncertainty

The fast-evolving tech landscape requires continuous learning. The unpredictability of threats and managing complex systems lead to burnout and self-doubt, increasing pressure to stay ahead of attackers.

Day-to-Day Cybersecurity Operations

Cybersecurity professionals also manage daily tasks like network monitoring and incident response. The constant vigilance and high task volume often lead to cognitive overload, disrupting work-life balance and causing fatigue.

A Call to Address Mental Health

The mental health challenges facing cybersecurity professionals are significant. Organizations must address these challenges and provide support. This important issue will be discussed at KuppingerCole’s Cyberevolution 2024 conference in Frankfurt, Germany, from 3–5 December.

Addressing mental health is key to fostering a resilient workforce. Recognizing this helps protect both digital infrastructures and the professionals who defend them. Providing realistic workloads and work-life balance, and destigmatizing mental health, are essential for a sustainable workforce.

At cyberevolution 2024, speakers on this topic include Sarb Sembhi, CTO at Virtually Informed; Jasmine Eskenzi, Co-Founder & CEO of The Zensory; Inge van der Beijl, Director Innovation at Northwave Investigation and Innovation; and Hermann Huber, CISO at Hubert Burda Media.

They will be addressing topics such as “Cyber Mindfulness: Harnessing Mindfulness to Combat Social Engineering Attacks and Empower the Cyber Workforce of the Future,” “Cybersecurity and Mental Health: Navigating Crisis Impact,” and “Stress, Burnout and Declining Motivation in Cybersecurity!” There will also be a panel discussion on “Addressing Mental Health Challenges in Cybersecurity.”

Sunday, 13. October 2024

KuppingerCole

Going Beyond Identity: A Deep Dive into Zero Trust Security

Matthias and Alejandro discuss the concept of Zero Trust, emphasizing its importance in modern cybersecurity. They explore the core principles of Zero Trust, including continuous monitoring, data protection, and the common misconceptions surrounding it. The discussion highlights the significance of automation and orchestration in enhancing security measures and provides real-world examples of successful Zero Trust implementations. The conversation concludes with insights into future trends and the evolving nature of cybersecurity threats.




FindBiometrics

Luxand Marks NIST Testing Success, Launches Upgraded SDK

Luxand, Inc. is celebrating the latest success of its FaceSDK in the National Institute of Standards and Technology (NIST) Face Recognition Vendor Test (FRVT). In the Visa-Border Combination test, FaceSDK […]

Saturday, 12. October 2024

FindBiometrics

UK Fintech Firm Implements Random Selfie Checks to Fight Fraud

ANNA Money, a fintech company claiming a customer base of more than 100,000 small and medium-sized enterprises (SMEs) in the United Kingdom, has announced biometric re-authentication measures to strengthen security […]

Friday, 11. October 2024

SC Media - Identity and Access

Experts say MFA is no longer enough for enterprises

The UK’s cyber watchdog says that companies need to be more mindful with how they handle their multi-factor authentication.


FindBiometrics

AI Update: High Demand, Big Losses, and Grave Warnings

Welcome to the newest edition of ID Tech’s AI update. Here’s the latest big news on the shifting landscape of AI and identity technology: OpenAI may not make a profit […]

TBD on Dev.to

Known Customer Credential Hackathon

tbDEX is an open messaging protocol that enables liquidity seekers to connect with liquidity providers. This means that as a liquidity provider, your business can be the backend supplier in several payment applications.

Performing KYC on repeat customers every time they attempt to transact with you from a different payment app would be a pain. To avoid this, you will use the Web5 SDK to issue a Known Customer Credential (KCC) to a customer, Alice, who you have already completed KYC on. You will store the JWT representing the KCC in Alice’s Decentralized Web Node so that she can present it to your business from any payment app.

Challenge

1. Create a Decentralized Identifier (DID) and a DWN to use as the Issuer. Bonus: use the DIF community DWN instance hosted by Google Cloud.

2. Issue Alice a KCC that includes evidence. Note that for this challenge, you do not need to implement an actual identity verification flow.

3. Install the VC Protocol onto your DWN so that you can communicate with Alice’s DWN.

4. Obtain permission to write to Alice’s DWN by sending a GET request to https://vc-to-dwn.tbddev.org/authorize?issuerDid=${issuerDidUri} (see the request sketch below).

5. Store the VC JWT of the KCC as a private record in Alice’s DWN.
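As a rough illustration of the authorization request in step 4, here is a sketch in Python using the requests library. The endpoint and the issuerDid query parameter come straight from the challenge text; the issuer DID value is a hypothetical placeholder, and the response handling is an assumption to be checked against the actual service.

```python
import requests

# Hypothetical placeholder: substitute the DID URI of the issuer you created.
ISSUER_DID_URI = "did:example:replace-with-your-issuer-did"

# Ask the service for permission to write to Alice's DWN (challenge step 4).
resp = requests.get(
    "https://vc-to-dwn.tbddev.org/authorize",
    params={"issuerDid": ISSUER_DID_URI},
    timeout=30,
)
resp.raise_for_status()
print(resp.text)  # inspect the authorization/grant returned by the service
```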

Submit

To enter a submission for this hackathon, provide the DWN Record ID of the KCC.

Resources

• Alice’s DID: did:dht:rr1w5z9hdjtt76e6zmqmyyxc5cfnwjype6prz45m6z1qsbm8yjao
• web5/credentials SDK
• web5/api SDK
• How to create a DID and DWN with Web5.connect()
• Obtain Bearer DID – required to sign KCC
• Known Customer Credential Schema
• How to issue a VC with Web5
• Example of issuing a KCC with Web5
• Example of issued KCC
• How to install a DWN Protocol
• How to store a VC in a DWN

Contact Us

If you have any questions or need any help, please reach out to us in our #kcc-hackathon channel on Discord.


FindBiometrics

ID Tech Digest – October 11, 2024

Welcome to ID Tech’s digest of identity industry news. Here’s what you need to know about the world of digital identity and biometrics today: Mets Owner Faces Class-Action Lawsuit Under […]

Spruce Systems

Fighting Election Deepfakes with Digital Identity

Discover how digital signatures can ensure the authenticity of online announcements, helping to restore trust in a world where misinformation thrives.

One of the biggest pieces of news of the 2024 U.S. Presidential election has been the July 21st announcement by President Joe Biden, made via a letter that many saw first on social media, that he was withdrawing from the race. The immediate reaction was skepticism and disbelief – an understandable reaction in an era when it seems like more and more of what we see on the internet is fake, false, or misleading.

The fallout of this skepticism was luckily limited. However, misinformation can have major impacts on people’s behavior, and the broader mistrust it sows can be deeply toxic for an entire society. Current attempts to deal with the problem, such as by fact-checking organizations, can’t keep up, especially as generative AI makes fakes much easier to produce.

It’s time for a different way to authenticate content online, and luckily, there’s one not too far over the horizon: digital signatures based on privacy-preserving cryptography can be used to prove the real source of online content. States, including California, are testing out a state-issued digital ID, known as the mobile driver’s license (mDL), based on these digital signatures. 

Particularly for important announcements from trusted sources, trustworthy digital signatures could have a huge positive impact on the information environment, and ultimately could help rebuild the trust that has been eroded by the online free-for-all of the past decade.

Let’s explore how that could work.

The Death of Drawn Signatures

President Biden’s withdrawal announcement was made, not in a network-televised speech, but via a letter on Biden’s letterhead. The letter was distributed to news outlets but also posted to social networks, including X (formerly known as Twitter), where many commentators saw it first. This cut out key sources of trust and vetting: the authenticity of a direct spoken statement and the third-party confirmation of a news organization.

It’s little surprise, then, that some speculated that Biden’s letter might not be real. After all, Twitter accounts can be hacked, and anyone might have created the letter. Notably, skeptics cast doubt specifically on Biden’s signature – the very tool humans have used to prove the authenticity of communications for centuries, even millennia.

Those doubts left a gap for a fake video of Biden purportedly making the announcement. That’s just one example of the fake videos, audio, and photos we’re likely to see in the coming weeks and months, as partisans engage in boundary-breaking informational warfare. 

Disinformation has always been one of the dark arts of politics, but new generative AI tools make such fakery so easy that fact-checkers can never hope to keep up. In fact, AI and automation are also empowering “bots” on social media and across the internet, which can simulate real humans’ reactions to content, misleading some victims even more severely with false “social proof.” In one worrying recent example, Russian operatives have used AI to impersonate Americans supposedly opposed to military support for Ukraine.

With the internet increasingly the center of political discussion in America and around the world, and with the most powerful politicians in the world making major announcements via social media, we need a better way to separate the fake from the real.

The Unfakeable Proof of Digital Signatures

To understand how content could be reliably associated with a real-world identity, we have to touch on a somewhat difficult topic: cryptography.

The problem with verifying content online up to now is that the infrastructure of the internet has no built-in identity system, and any digital file can be copied. That’s why digital information systems “break” traditional forms of attestation – anyone can post any file, from any location, and claim to be anyone. Not only can you copy-paste a written signature onto any document, you can now fairly effectively fake video of someone making a statement. While dedicated digital sleuths can spot impostors in various ways, it’s very difficult for amateurs.

Reliably “signing” a digital message instead relies on encryption techniques that aren’t exactly new but are still unfamiliar—digital signatures and public-key cryptography. 

In very broad terms, online public information could be reliably signed using a digital certificate issued and affirmed by a known source – possibly a driver’s license issuer, but not exclusively, as we’ll see. That certificate would then be mathematically mixed with the digitized message content, or “hashed,” to produce a string of characters that can only be matched back to that specific content-signature pair. 

That hash file would be attached to a public post, and anyone who wanted to affirm its authenticity could check that this specific content was signed by a specific person’s certification. To draw a rather abstract metaphor, it’s like signing a document with ink that contains all the letters in the document itself – a signature unique to one piece of data.
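To make this a little less abstract, here is a minimal sketch of the sign-and-verify flow using Ed25519 signatures, assuming Python’s third-party cryptography package. It demonstrates the core property described above, that a signature is bound to one exact piece of content; a real mDL-based scheme would add issuer-signed certificates and selective disclosure on top, and the statement text here is invented for illustration.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

statement = b"I am withdrawing from the race."  # hypothetical content

# The signer holds the private key; the public key is published or certified.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the exact bytes of the statement.
signature = private_key.sign(statement)

# Anyone with the public key can check that this content carries this signature.
try:
    public_key.verify(signature, statement)
    print("Authentic: signed by the holder of the key.")
except InvalidSignature:
    print("Forged or altered content.")

# Changing even one character of the statement breaks verification.
try:
    public_key.verify(signature, statement + b"!")
except InvalidSignature:
    print("Tampered statement detected.")
```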

This leaves out a lot of technical detail, but what matters is that this system can’t be spoofed or broken, except by extraordinary measures, such as physically stealing certificate-signing hardware from the DMV. In the case of our election example, the President could use his mobile driver’s license or other verifiable digital ID to digitally sign the content in his social media statement, and the public would be able to trust its authenticity.

This type of digital signature has another advantage – you don’t actually have to reveal your identity to sign content. Digital ID systems, such as mobile driver’s licenses, have what are known as ‘selective disclosure’ features, meaning you can attest only to the specific information you want. That can include simply affirming that “a human produced this content,” without disclosing your name. Or you can show that it was made by “a human from Dallas,” without disclosing your address. 

This is important to emphasize because the idea of a digital identity can initially sound oppressive or authoritarian – and it certainly can be, if implemented using authoritarian ideals. But under the right regulatory and technology framework, they can be far more privacy-preserving than current models.

Most importantly, and in sharp contrast with the most dystopian fears, you won’t even have to depend on a government agency to attest to your identity.

This is a widely-shared vision of the digital identity future, one that aligns with the values of privacy, individual freedom, and democratic choice. At the same time, it offers a vast improvement in online trust over the current status quo. 

Over the next few weeks, Americans and many others will see yet again just how flawed our online discourse is. Being able to prove who’s talking, whether President or pauper, is an obvious starting point for fixing it.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


FindBiometrics

DoD Mulls Expedited Acquisition Process for AI Tech

The Department of Defense is considering creating a dedicated acquisition pathway for artificial intelligence (AI) to streamline and expedite AI capabilities for military personnel. Federal News Network reports that Young […]

Thales, Badge Team Up to Secure IAM with Shared Devices

Thales has announced a new partnership with Badge, aimed at enhancing cybersecurity solutions for sectors that rely on shared devices, such as healthcare, retail, and manufacturing. By integrating Badge’s passwordless […]

After New Hearing, Judge ‘Fears’ BIPA Case Against Google Will Have to Proceed

A class-action lawsuit alleging that Google misused Illinois residents’ biometric data has recently gained traction after surviving a third motion to dismiss. This case, filed by Illinois resident Steven Vance, […]

Civic

Civic Milestones & Updates: Q3 2024


Several macroeconomic and blockchain-related events marked the third quarter of 2024. At its close, Bitcoin was on track to finish September with a nearly 7 percent return, its best performance since 2013. In the US, the Fed lowered interest rates and indicated a steady set of rate cuts was on the horizon. Elsewhere, […]

The post Civic Milestones & Updates: Q3 2024 appeared first on Civic Technologies, Inc..


KuppingerCole

Network Detection and Response (NDR)


by Osman Celik

This report provides an overview of the Network Detection & Response (NDR) market and a compass to help you find a solution that best meets your needs. It examines solutions that provide an integrated set of security and compliance capabilities designed to protect cloud-native applications across the development and production lifecycle. It provides an assessment of the capabilities of these solutions to meet the needs of all organizations to monitor, assess, and manage these risks.

Network Detection and Response (NDR)


by Osman Celik

Enhance network security with NDR solutions. Improve threat detection, compliance, and performance in complex infrastructures. Elevate your cybersecurity posture today by using this buyer's guide to help you find a solution that is right for you.

Cloud Backup for AI Enabled Cyber Resilience


by Mike Small

Discover how to achieve cyber resilience with robust data backup and recovery solutions, protecting against ransomware, IT failures, and regulatory challenges. Use our buyer's guide to help you find the solution that is right for you.

Okta

How to Create a Secure CI/CD Pipeline Using Okta Terraform


Embarking on a DevOps journey can be exciting and daunting, especially for beginners. The landscape is vast, and the learning curve can feel steep. One of the most common challenges is setting up and managing a robust Continuous Integration/Continuous Deployment (CI/CD) pipeline that ensures seamless integration and delivery of code changes. This guide aims to simplify that process by walking you through setting up a CI/CD pipeline for Okta using Terraform, AWS, and GitHub Actions.

Overcoming DevOps challenges securely

Getting started with DevOps often presents a series of challenges:

- Running Locally: Setting up Terraform locally involves dealing with packages, dependencies, and managing the state file, which can be cumbersome and error-prone.
- Collaboration: Ensuring team members can collaborate effectively requires a consistent and reproducible environment.

Making a setup production-ready introduces further complexities:

- State File Storage: Knowing where and how to store the Terraform state file securely.
- Secrets Management: Safely storing and managing sensitive information like API keys and passwords.
- Automation: Automating the deployment process to ensure reliability and efficiency.

In this post, we’ll use Okta, Terraform, AWS, GitHub, and GitHub Actions to create a secure CI/CD pipeline.

Table of Contents

- Overcoming DevOps challenges securely
- CI/CD pipeline architecture using Terraform, AWS, Okta, and GitHub
- CI/CD workflow overview
- Store Terraform files in source control
- Connect to Okta securely using OAuth 2.0
- Leveraging AWS for Terraform Backend and Secrets Management
- Store Terraform backend components in AWS
- Manage secrets securely
- Set up the IAM policy for the CI/CD pipeline
- Configure an OpenID Connect Provider in GitHub
- Create IAM roles for the CI/CD pipeline
- Use GitHub Actions to trigger Terraform commands
- Leverage GitHub Actions for the CI/CD workflow
- Organize the CI/CD and Terraform code files for maintainability
- Build the CI/CD pipeline using Terraform and Okta
- Set up source control branches for Terraform code files
- Finalize Terraform configuration
- Connect Terraform code to Okta resources
- GitHub Actions triggers Terraform dev build
- GitHub Actions trigger Terraform prod plan
- GitHub Actions trigger Terraform prod build
- Learn more about Okta, Terraform, CI/CD patterns, and OAuth 2.0

By the end of this post, you’ll have a solid understanding of how to set up a CI/CD pipeline tailored for Okta and the knowledge to start implementing infrastructure as code with Terraform.

Let’s dive in and take the first step towards mastering DevOps with a practical, hands-on approach!

Prerequisites

You’ll need the following tools installed on your local machine. Follow the installation instructions through the provided links.

IDE with a Terraform plugin, such as Visual Studio Code or IntelliJ IDEA

Choosing the proper Integrated Development Environment (IDE) with a Terraform plugin is crucial for an efficient and error-free workflow. Some essential features to look for in your IDE:

- Variable Declaration Warnings: If your Terraform module requires certain variables, the IDE will alert you when any required variables are not declared.
- Resource Declaration Assistance: When you declare a resource, the IDE will warn you if any required attributes are missing and suggest attributes to add.
- Resource and Attribute Autocompletion: The IDE will autocomplete resource names and attributes when referencing other resources, saving time and reducing errors.

Git

Terminal window

You’ll need the following accounts:

- Okta Workforce Identity Cloud Developer Edition account
- GitHub account and a GitHub organization account (You can create a free GitHub organization if you don’t have access to one)
- A free AWS account

CI/CD pipeline architecture using Terraform, AWS, Okta, and GitHub

It is essential to understand the key components and their roles in the CI/CD process. This integration of GitHub, Terraform, AWS, and Okta allows for secure and efficient infrastructure management and deployment. The following overview details each component and its function.

User

Develop Code: Develops Terraform code on their local machine using a preferred IDE, then uses Git to push it to the GitHub repository.

GitHub Repository

- Code Storage: Stores the Terraform configuration code.
- Triggers Workflow: GitHub Actions checks out the code and automates builds using Terraform based on events within the GitHub repository (e.g., pushes to branches, pull requests, etc.).

GitHub Actions

- Workflows: Automatically triggered by GitHub repository events; they execute the necessary commands to integrate with AWS and Terraform.
- AWS:
  - Assume Role: Integrates with AWS IAM STS via the GitHub OIDC IdP to authenticate and assume roles with a web identity.
  - Temporary Credentials: Uses temporary credentials returned from AWS IAM STS for Terraform backend operations.
- Terraform: Runs Terraform commands to manage infrastructure.

Terraform

- State Management:
  - S3: Utilizes S3 for storing Terraform state files.
  - DynamoDB: Uses DynamoDB for state locking to ensure consistency and prevent concurrent operations.
- Secrets Management: Retrieves the Okta OAuth 2.0 client credentials private key from AWS Secrets Manager for authentication and authorization to Okta management APIs.

Okta

- Resource Management: Leverages Okta APIs via the Terraform Okta provider to manage resources.

CI/CD workflow overview

At a high level, this is what we aim to build out through this article. We’ll set up a CI/CD pipeline that automates infrastructure deployment using GitHub, Terraform, AWS, and Okta. Here’s a simplified overview of the workflow:

1. Branch Creation: Developers create and work on a develop branch.
2. Push to Develop: Code changes are committed locally and pushed to the remote develop branch.
3. Dev Build: GitHub Actions runs Terraform commands to deploy to the development environment. The push to develop automatically triggers this.
4. Pull Request to Main: A pull request is made from develop to main for code review. Any GitHub Actions workflow executions are included in the pull request for review.
5. Prod Plan: GitHub Actions previews changes for the production environment. This is triggered automatically by the pull request to main, and it lets pull request reviewers validate potential changes before modifying the production environment.
6. Merge to Main: The pull request is approved and merged into the main branch.
7. Prod Build: GitHub Actions runs Terraform commands to deploy to the production environment. The merge to main automatically triggers this.

Store Terraform files in source control

We’ll use GitHub as our code repository and GitHub Actions for our CI/CD workflows, so you’ll need a GitHub account. If you don’t have one, create one at GitHub.

You will also need a GitHub Organization. If you are an enterprise user, you likely already have one. If not, or if you’re experimenting, you can create one for free by following the GitHub Organizations instructions to start creating an Organization.

You’ll create a new repository within your GitHub Organization and then connect it to your local development environment:

1. Create a new repository: We created a templated repository for you to use for this guide. Follow the Creating a repository from a template instructions from GitHub and use this sample template. Select your GitHub Organization as the owner and name the repository using a structure such as {okta-domain-name}-okta-terraform (e.g., atko-okta-terraform). Ensure you set the repository to Private. This setting is crucial as the repository will run GitHub Actions workflows and have information related to your environment (e.g., AWS resource names).
2. Clone the Repository: Once you create your repository, copy the clone link and run the following commands in the command line, replacing the variables with your GitHub username, GitHub organization, and repository name:

git clone https://{your_github_username}@github.com/{your-github-organization}/{your-repository-name}.git
cd {your-repository-name}

Connect to Okta securely using OAuth 2.0

We will use the OAuth 2.0 client credentials flow to access Okta APIs. OAuth 2.0 is the most secure method for integrating with Okta APIs: we can tightly bind authorizations using scopes, and access tokens are short-lived compared to long-lived SSWS API keys. Furthermore, Okta’s Terraform provider supports OAuth 2.0 Demonstrating Proof-of-Possession (DPoP), an additional security mechanism that cryptographically binds access tokens to a particular client, thereby reducing the risk of token replay by a malicious actor.
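To make the moving parts concrete, here is a minimal sketch of how these OAuth 2.0 settings typically map onto the Okta Terraform provider block in main.tf. Treat the wiring as illustrative: the variable names match the tfvars described later in this guide, and the Secrets Manager data source (okta_key) is a stand-in sketched in the secrets section below.

provider "okta" {
  org_name  = var.okta_org_name   # Okta tenant prefix, e.g. "atko"
  base_url  = var.okta_base_url   # e.g. "oktapreview.com"
  client_id = var.okta_client_id  # OAuth 2.0 API Services app
  scopes    = var.okta_scopes     # e.g. ["okta.groups.manage"]

  # Private key for the client credentials flow, read from AWS
  # Secrets Manager rather than stored in source control.
  private_key    = data.aws_secretsmanager_secret_version.okta_key.secret_string
  private_key_id = var.okta_private_key_id
}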

The Okta OAuth client requires ‘scopes’ to interact with the management API. For this guide, we will interact with the Groups resource in Terraform and corresponding APIs. To understand the corresponding scopes related to a Terraform resource and underlying Management APIs, refer to the Okta API documentation.

Finally, the OAuth client requires an Administrator Role to make administrative changes. We will assign the Organization Administrator role, as this contains sufficient permissions for the resources we manage within this build. If you intend to use Terraform to manage your environment on an ongoing basis, a Super Administrator role may be required (especially for managing resources like Admin Roles). The effective permissions are a combination of the scopes permitted for the client and the Administrator Role - so even though we give the client ‘Organization Administrator,’ if we only grant access to ‘groups’-related scopes, all the client can do via the API is manage groups!

Follow these steps to set up an API Services application in Okta. Navigate to the Okta Admin Console and follow the steps to create the API services application:

1. Navigate to Applications > Applications and press the button to Create App Integration.
2. Select API Services and press Next.
3. Name your application (e.g., Terraform).
4. Press Save.

In the General Settings tab, find the Client Credentials section and press Edit to make the following changes:

1. Change the Client authentication method to Public key / Private key.
2. In the Public Keys section, click Add key and then Generate new key.
3. Select the PEM tab and copy the contents to a file you’ll use later.
4. Select Done and Save.

Navigate to the Okta API Scopes tab and make the following changes:

Find okta.groups.manage and select Grant

Navigate to the Admin roles tab and press Edit assignments. Then apply the following changes:

1. In the Role drop-down, select ‘Organization Administrator’, or your preferred Admin Role.
2. Select Save Changes to finish assigning the role.

Repeat these steps to create an API Service Okta application and configure it for any additional environments you manage.

⚠️ Important

Do not save the private key locally. In the next steps, we will securely onboard it to secrets management.

Leveraging AWS for Terraform Backend and Secrets Management

We will utilize AWS for both the Terraform backend and Secrets Management. The Terraform backend will store state files, which track the status of your Okta environment based on previous builds. We will use the GitHub OIDC integration with AWS for Terraform authentication. This allows GitHub to authenticate with AWS using OpenID Connect (OIDC) and assume the necessary role via web identity to interact with required services. This approach eliminates the need for long-lived or persistent secrets (such as AWS access keys and secrets), ensuring a more secure setup.

Store Terraform backend components in AWS

First, let’s create the necessary components for the Terraform backend.

Create an S3 Bucket

1. Follow the Creating a bucket instructions from AWS to create a bucket. Name the bucket using a structure such as {okta-domain-name}-okta-terraform-state.
2. By default, Block all public access is enabled, which keeps your bucket contents private. This is an integral control given that the bucket will contain information about your Okta configuration.
3. I highly recommend enabling Bucket Versioning to version your state files. This is a valuable feature should you need to roll back to previous versions of the state.
4. After you have created the bucket, follow the Viewing the properties for an S3 bucket instructions to navigate to the properties of the bucket and capture the ARN. The ARN will be used later to define the AWS IAM Role Policy.
5. Lastly, we will use folders to organize your different environments’ state files. Follow the Organizing objects in the Amazon S3 console by using folders instructions to create a folder for each environment you manage (e.g., dev and prod).

Create a DynamoDB Table for State Locking

1. Follow the Create a table in DynamoDB instructions to create a DynamoDB table.
2. Name the table using a structure such as {okta-domain-name}-okta-terraform-{environment} (e.g., atko-okta-terraform-dev).
3. Set the partition key to ‘LockID’ and leave the other configuration defaults.
4. Note the table name; we will be using it later in the AWS IAM Role Policy definition.
5. Repeat for any other environments you manage.

For more information on the AWS S3 Terraform backend, please refer to Terraform S3 Backend Documentation.
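For orientation, the backend block in main.tf is typically left as a “partial configuration” and filled in at init time from the backend-*.conf files described later in this guide; a minimal sketch (file names are the ones this guide uses):

terraform {
  backend "s3" {
    # bucket, key, dynamodb_table, and region are intentionally
    # omitted here and supplied per environment at init time:
    #   terraform init -backend-config=backend-dev.conf
  }
}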

Manage secrets securely

Next, we will set up AWS Secrets Manager to securely store the private key for authentication and authorization to Okta management APIs.

1. Follow the Create an AWS Secrets Manager secret instructions to store the OAuth 2.0 private key(s). When configuring the secret, note this is of the secret type Other type of secret, and Plaintext.
2. Ensure you name the secret something meaningful, as this will be referenced in your Terraform configurations as well as the AWS IAM Role Policy definition. Follow a structure such as {environment}/okta-terraform-key (e.g., dev/okta-terraform-key).
3. Since it’s a private key, keep any rotation-related configurations as default options.
4. Once the secret has been created, copy the ARN for later use within the AWS IAM Role Policy definition.
5. Repeat for any additional environments you manage.
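Terraform can then read that secret at plan/apply time through the AWS provider’s Secrets Manager data source. A minimal sketch, where the data source name okta_key is illustrative and var.okta_secret_id holds the secret name chosen above:

# Looks up the latest version of the stored private key by name.
data "aws_secretsmanager_secret_version" "okta_key" {
  secret_id = var.okta_secret_id # e.g. "dev/okta-terraform-key"
}

# The decrypted key is then available to the Okta provider as
# data.aws_secretsmanager_secret_version.okta_key.secret_string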

Set up the IAM policy for the CI/CD pipeline

Next, we’ll create the IAM Policy definition. This policy will be used by the role that GitHub will assume via OpenID Connect (OIDC).

First, we will prepare the IAM policy JSON file. Use the following template and make the necessary replacements using the ARNs you’ve captured from the previous steps.

- Replace <S3-ARN> with the ARN of your S3 bucket. This grants permission to list the bucket. You can find it under the Properties tab of the S3 bucket. Example: arn:aws:s3:::akto-okta-terraform
- Replace <S3-ARN>/* with the ARN of your S3 bucket and any folder structures for respective environments. This grants permission to get and update objects in the relevant path. Alternatively, you can use a wildcard (*) for the entire bucket. Example: arn:aws:s3:::akto-okta-terraform/dev/*
- Replace <AWS-Region>, <Account-Number>, and <DynamoDB-Table-Name> with the AWS Region, AWS Account Number (found in the management console toolbar), and DynamoDB Table Name respectively. This grants permission to add and remove rows in the table for the Terraform state file locking process. Include any additional tables for each environment. Example: arn:aws:dynamodb:ap-southeast-2:99123456789:table/akto-okta-terraform-dev
- Replace <SecretsManager-ARN> with the ARN of your Secrets Manager secret. This grants permission to retrieve the secret value. Include any additional ARNs for each environment. Example: arn:aws:secretsmanager:ap-southeast-2:99123456789:secret:dev/akto_okta_terraform_key-QuqiGR

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "<S3-ARN>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["<S3-ARN>/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": ["arn:aws:dynamodb:<AWS-Region>:<Account-Number>:table/<DynamoDB-Table-Name>"]
    },
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:ListSecrets", "secretsmanager:GetSecretValue"],
      "Resource": ["<SecretsManager-ARN>"]
    }
  ]
}

Follow the Create IAM policies documentation for instructions on creating an IAM Policy. When creating the policy document, use the JSON editor and input the JSON from the previous step. Name the policy something meaningful (e.g. ‘Okta_Terraform_Backend’).

By following these steps, you will have created an IAM policy that provides the necessary permissions for Terraform to interact securely with AWS services.

Configure an OpenID Connect Provider in GitHub

Next, we’ll configure the OIDC Identity Provider for GitHub. Follow the AWS instructions at Create an OpenID Connect identity provider in IAM.

- For the Provider URL, use https://token.actions.githubusercontent.com
- For the Audience, use sts.amazonaws.com

For more information on integrating GitHub with AWS using OIDC, refer to the GitHub and AWS integration documentation.

Create IAM roles for the CI/CD pipeline

Finally, we’ll create an IAM Role for the GitHub OIDC Identity Provider to assume. This role will link the OIDC Identity Provider via the trusted entity and the policy via permissions.

Follow the instructions for Creating a role for OIDC from AWS. When configuring the Trusted Entity, choose Web Identity, and use the following values for the configurations:

- Identity provider: token.actions.githubusercontent.com
- Audience: sts.amazonaws.com
- GitHub organization: {your_github_organization} (the unique identifier for your GitHub Organization)
- GitHub repository: {your_github_repository} (the name of your GitHub repository)
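If you inspect the role after the wizard finishes, its trust policy should look roughly like the following sketch (AWS generates this for you; the sub condition is what pins the role to your specific organization and repository):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<Account-Number>:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:{your_github_organization}/{your_github_repository}:*"
        }
      }
    }
  ]
}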

For permissions, choose the IAM Policy (‘Okta_Terraform_Backend’, or your name of choosing) you created earlier. Name the role something meaningful (e.g. ‘GitHub_Okta_Terraform_Backend’). Once the role has been created, copy the Role ARN. This is the only variable we need to pass to our pipeline to initialize the backend and retrieve the secret to authenticate and authorize Okta APIs — and it’s not even a secret!

By following these steps, you will have created an IAM Role that GitHub can assume via OIDC, enabling secure interactions with AWS and Okta.

Use GitHub Actions to trigger Terraform commands

GitHub Actions allows us to run our build and deployment activities using Terraform commands executed in a temporary virtual machine.

First, we must store the Role ARN and other environment variables in GitHub. To create and store variables for the GitHub repository, follow the Creating configuration variables for a repository instructions.

- Store the Role ARN: Create a variable named AWS_ROLE_ARN and use the Role ARN for the value (e.g., arn:aws:iam::<Account-Number>:role/<Role-Name>).
- Store the Region: Create a variable named AWS_REGION and use the Region in which the AWS resources were created (e.g., ap-southeast-2). Refer to the AWS Regions Documentation for more details on Region names.

Ensure you do this at a ‘Repository’ level and not at an ‘Organization’ level, or the GitHub Actions workflows will not be able to read the variables.

Leverage GitHub Actions for the CI/CD workflow

We will use multiple pre-built GitHub Actions to authenticate to AWS and run our Terraform commands. No action is required from you to configure these workflows. At a high level, the configured GitHub Actions workflows will perform the following:

- GitHub Actions Runner: This action checks out your repository onto the runner, allowing you to run Terraform commands against your code.
- AWS Configure AWS Credentials: This action establishes an AWS session using the GitHub OIDC Identity Provider (IdP) and the Assume Role with Web Identity capability. There is no need to manage any secrets or custom scripts, as this action handles session establishment.
- Terraform CLI: This action runs the Terraform commands.

For more information and to examine the code, see the github/workflows folder within the repository.

Organize the CI/CD and Terraform code files for maintainability

The high-level structure of the repository looks like this:

github/
├─ workflows/
│  ├─ push-main.yml
│  ├─ push-develop.yml
│  ├─ pr-main.yml
terraform/
├─ modules/
│  ├─ {module}/
│  │  ├─ {resource}.tf
│  │  ├─ variables.tf
├─ main.tf
├─ variables.tf
├─ backend-dev.conf
├─ backend-prod.conf
├─ vars-dev.tfvars
├─ vars-prod.tfvars

Review the GitHub Workflows directory

github/workflows/: This directory contains the GitHub Actions workflow files that define the CI/CD pipeline.

- push-main.yml: Workflow triggered by a push to the main branch.
- push-develop.yml: Workflow triggered by a push to the develop branch.
- pr-main.yml: Workflow triggered by a pull request to the main branch.

Review the Terraform configuration files

terraform/: The root directory for all Terraform configuration files.

- modules/: This directory contains reusable Terraform modules.
  - {module}/: Each module has its own directory.
  - {resource}.tf: The Terraform configuration file for specific resources within the module.
  - variables.tf: The child module input variables definition file.
- main.tf: The main Terraform configuration file where all providers, modules, and variables are configured.
- variables.tf: The parent module input variables definition file.
- backend-dev.conf: Configuration for the backend components for the development environment. This configuration must be passed in via the CLI since named variables cannot be used directly in the backend block.
- backend-prod.conf: The configuration for the backend components in the production environment, similar to the development configuration.
- vars-dev.tfvars: Input variable values specific to the development environment.
- vars-prod.tfvars: Input variable values specific to the production environment.

Build the CI/CD pipeline using Terraform and Okta

Now that we have everything set up, let’s actually build something!

First, we will need to update a few files with some of the necessary configurations relevant to your environment. Then we will create a new group in your Okta environment, using variables to declare the group name.

Set up source control branches for Terraform code files

Ensure your local repository is up-to-date with the remote main branch.

git checkout main
git pull origin main

Create and switch to the branch named develop.

git checkout -b develop

Finalize Terraform configuration

Now that we have checked out our code, let’s finalize the configurations required for Terraform to interact with our backend, retrieve the necessary secrets, and interact with the Okta Management APIs. Open the repository in your preferred IDE to edit some files.

Backend configuration files

The Terraform backend configuration is stored within the backend-*.conf files, which contain configurations relevant to your environments. Within these files, you will find placeholders for the following:

- bucket - the name of your bucket (not the ARN!)
- key - the path to your Terraform state file (i.e., the folder and resultant file name, which defaults to terraform.tfstate)
- dynamodb_table - the name of your DynamoDB table (not the ARN!)
- region - the AWS Region

Replace all the placeholders in the backend-*.conf files. There are two files, for the development and production environments, respectively. Refer to the following example as a reference:

bucket         = "atko-okta-terraform"
key            = "dev/terraform.tfstate"
dynamodb_table = "atko-okta-terraform-dev"
region         = "ap-southeast-2"

Terraform variables (tfvars)

Variables are a critical component of infrastructure-as-code configurations, allowing you to keep a single set of configurations while maintaining environment-specific values. Within Terraform, one way to manage such environment-specific values is with ‘tfvars’ files. A ‘tfvars’ file contains a set of variable values specific to an environment. It is passed in via the Terraform CLI in our GitHub Actions workflow when running specific parts of the workflow.

Additional configuration-related variables stored within the vars-*.tfvars files require updates. Within these files, you’ll find placeholders for the following:

- region - the AWS Region
- okta_org_name - the prefix value for your Okta tenant
- okta_base_url - the base or suffix value for your Okta tenant
- okta_scopes - the scopes for the Terraform Okta OAuth 2.0 client application
- okta_client_id - the client ID for the Terraform Okta OAuth 2.0 client application
- okta_private_key_id - the private key ID for the Terraform Okta OAuth 2.0 client application. This is the ‘KID’ value, which can be obtained in the ‘Public Keys’ section of the OAuth 2.0 application configuration.
- okta_secret_id - the AWS Secrets Manager ‘secret name’ for the private key of the Terraform Okta OAuth 2.0 client application. This is the ‘Secret name’ value, not the ‘Secret ARN’.

Replace all the placeholders in the vars-*.tfvars files. Refer to the following example as a reference:

region              = "ap-southeast-2"
okta_org_name       = "atko"
okta_base_url       = "oktapreview.com"
okta_scopes         = ["okta.groups.manage"]
okta_client_id      = "0oaes123y1FekjfoE1d7"
okta_private_key_id = "ievOgRgNc...aJJn5ra_4"
okta_secret_id      = "dev/okta_terraform_key"

Connect Terraform code to Okta resources

The repository includes a directory module containing a resource file, okta_groups.tf, which we will use to provision a group in your Okta tenant. In doing so, we’ll also work through a core tenet of the previously mentioned variables, defining both input and output variables. This may be a little confusing initially, so take some time to understand how the different files and modules interact.

Open terraform/modules/directory/variables.tf and uncomment the following entry. This is the variables file for the directory module and it defines which input variables are required. Each module you develop will have its own variables file.

variable "okta_group_name" { type = string }

Open terraform/modules/directory/okta_groups.tf and uncomment the following entry. This is a resource block. The resource block has two parts: the resource type, okta_group, and the resource name, okta_test_group. Feel free to change the resource name (okta_test_group) to something you choose. Within the resource block body are the configuration arguments for the resource. We have one argument defined, name, which references the input variable okta_group_name.

resource "okta_group" "okta_test_group" { name = var.okta_group_name }

Open terraform/variables.tf and uncomment the following entry. This is the variables file for the parent or main module. The variables within this file are assigned via the tfvars files, which are passed in with environment-specific configurations via the Terraform CLI:

variable "okta_group_name" { type = string }

Next, open terraform/main.tf and uncomment the following entry. The main file contains critical configurations for the backend and providers (like Okta or AWS). It is also where we reference any modules, including the directory module, via their path within the local repository. It’s also necessary to pass any variables through within this module block. You can manage variables in two ways (see the sketch after this list):

1. Configure the variable values directly within the main file, which may be acceptable for any standardized or non-environment-specific variables.
2. Reference the parent module variables file, as we have done here: okta_group_name = var.okta_group_name
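A sketch of the resulting module block (the module path matches the repository layout shown earlier; only the one variable we use is passed through):

module "directory" {
  # Path to the child module within the repository.
  source = "./modules/directory"

  # Forward the parent module's input variable to the child module.
  okta_group_name = var.okta_group_name
}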

Open terraform/vars-dev.tfvars and terraform/vars-prod.tfvars and uncomment the following entry. This sets the value of the okta_group_name variable for each respective environment. Feel free to change it and make the values environment-specific.

okta_group_name = "Okta Test Group GitHub Actions"

Now, we can stage our changes. Use git add to add the changes for the next commit.

git add .

Lastly, commit the changes:

git commit -m "Initial commit"

With the changes committed, we can now push the changes to the remote develop branch.

git push origin develop

GitHub Actions triggers Terraform dev build

GitHub Actions is configured to trigger a build when changes are pushed to the develop branch. The workflow defined in the repository will:

1. Authenticate with AWS: Use GitHub OIDC to assume the necessary role.
2. Run Terraform Commands: Execute terraform init, terraform plan, and terraform apply to deploy changes to the development environment.
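In terms of plain CLI invocations, the workflow’s Terraform steps amount to something like the following sketch (the exact flags live in the workflow files under github/workflows; -backend-config and -var-file select the environment):

terraform init -backend-config=backend-dev.conf
terraform plan -var-file=vars-dev.tfvars
terraform apply -var-file=vars-dev.tfvars -auto-approve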

Monitor the action in GitHub to ensure the build completes successfully, and check your Okta environment to observe the creation of the group using the name specified in the tfvars file.

If GitHub Actions has any errors, refer to the error message within the GitHub Actions workflow for further details.

If you missed any configurations within the repository files (e.g., backend-*.conf or vars-*.tfvars), make the changes locally and perform the git add, git commit, and git push commands again.

If you missed any configurations within Okta (e.g., OAuth 2.0 scopes) or AWS (e.g., IAM Role permissions, etc.), then correct the issue and re-run the GitHub Actions workflow from the GitHub Actions console on a failed workflow.

Create a pull request to merge code from the develop branch to the main branch:

1. Navigate to the repository on GitHub.
2. Open a pull request from develop to main.
3. Provide a detailed description of the changes and any context or considerations for the reviewers.

GitHub Actions trigger Terraform prod plan

When a pull request is opened, GitHub Actions triggers a Terraform plan for the production environment. This plan will:

1. Authenticate with AWS: Use GitHub OIDC to assume the necessary role.
2. Run Terraform Plan: Execute terraform init and terraform plan to show the potential changes without applying them to the production environment.

Reviewers can inspect the plan output to understand the impact of the changes before merging.

After reviewing and approving the pull request, merge it into the main branch. You can merge using the GitHub Pull Request user interface.

GitHub Actions trigger Terraform prod build

Merging to the main branch triggers a new GitHub Actions workflow. The workflow will:

1. Authenticate with AWS: Use GitHub OIDC to assume the necessary role.
2. Run Terraform Commands: Execute terraform init, terraform plan, and terraform apply to deploy changes to the production environment.

Monitor the Actions tab to ensure the deployment completes successfully.

Learn more about Okta, Terraform, CI/CD patterns, and OAuth 2.0

In this article, we have outlined the architecture and steps needed to set up a secure and efficient CI/CD pipeline using GitHub Actions, Terraform, AWS, and Okta. By leveraging these technologies, we can automate infrastructure management, ensuring consistency and reducing the risk of manual errors. We covered the integration of GitHub with AWS for secure authentication and authorization, the configuration of Terraform for state management and secrets handling, and the overall workflow for deploying changes from development to production. If you found this post interesting, you may like these resources:

- How to Secure Your Kubernetes Clusters With Best Practices
- How Can DevOps Engineers Use Okta?
- Store ASP.NET Secrets Securely with Azure KeyVault
- How to Deploy a .NET Container with AWS ECS Fargate

Stay tuned for subsequent articles for Okta recommended policies to help get you started with secure-by-design configurations from day one!

Remember to follow us on Twitter and subscribe to our YouTube channel for more exciting content. Leave us a comment below if you have any questions or requests for topics!


PingTalk

Achieving Zero-Impact Retail Identity Migration

Achieve a zero-impact retail identity migration. Learn how to seamlessly migrate your customer base to a new platform, enhance security, and boost customer satisfaction.

There’s little doubt that the COVID-19 pandemic and its aftereffects have had a dramatic impact on the retail sector, with many retailers seeing a marked uptick in traffic through their digital channels as consumers embrace the freedom of shopping from anywhere at any time. The eCommerce share of total sales continues to rise by roughly 7.5% annually, and hybrid shopping has taken off, with consumers shopping at least partially online 55% of the time.1 This increase in digital interaction tends to go hand in hand with a requirement for a modern, best-of-breed Customer Identity and Access Management (CIAM) platform in order to truly capitalize on the upsell and cross-sell opportunities that come from increased consumer stickiness and personalization.

 

While many retailers agree that legacy identity systems – often tightly coupled with existing CRM or ecommerce platforms – reduce agility and negatively impact user experience, it can seem daunting to embark on a migration project, particularly when the impacts of migrating a large existing customer base to that new platform are considered. Nevertheless, migration is often a necessary first step in fulfilling other key digital transformation initiatives.

 

Read on to see how retailers can alleviate these concerns and tackle the migration process with confidence – without leaving a single customer behind.

Thursday, 10. October 2024

SC Media - Identity and Access

Fidelity Investments confirms August breach affected 77K customers

Fidelity maintains that there’s no indication of a ransomware incident – and that no funds were stolen.



KuppingerCole

Identity Fabrics


by Alejandro Leal

Explore Identity Fabrics: the key to secure, scalable IAM, bridging legacy and modern systems in digital transformation. Learn more about how to select the solution that is right for you in our buyer's guide.

Nov 14, 2024: Understanding the Impact of AI on Securing Privileged Identities

Understanding the impact of AI on securing privileged identities has become a critical concern in today's rapidly evolving cybersecurity landscape. As artificial intelligence continues to advance, it presents both opportunities and challenges for organizations striving to protect their most sensitive access points. The rise of AI-powered threats has significantly altered the traditional identity attack chain, requiring a fundamental shift in how we approach privileged identity security.

KILT

Expanding Horizons: KILT Token’s First Move Towards Multi-Chain With Ethereum


We are thrilled to announce an exciting new development for KILT, its community and all Polkadot parachains: a seamless token bridge to Ethereum with the Project Polar Path, developed by the KILT Core Team.

Thanks to Polar Path, the KILT token is making its first leap beyond Polkadot and onto the Ethereum blockchain. This marks a crucial milestone in our journey towards a true multi-chain future, bringing more flexibility and exposure to KILT. In the near future, we plan to extend to even more blockchain networks, but today, we want to introduce the first step in this broader vision.

About Project Polar Path

Project Polar Path is a breakthrough feature for Polkadot parachains, enabling seamless token switches between Polkadot and Ethereum. At its core, Polar Path is a pallet specifically designed for Polkadot parachains. It has already been implemented on the KILT blockchain, leveraging Snowbridge — a secure, trustless bridge connecting Polkadot and Ethereum.

Snowbridge has already allowed ERC-20 tokens to cross over from Ethereum to Polkadot, but so far, the reverse has been a challenge. That’s where Polar Path comes in. It enables the conversion of native parachain tokens (like KILT) into an Ethereum-compatible version (ERC-20) and vice versa, providing a solution for parachains looking to expand their presence on Ethereum.

General Parachain Token Conversion vs. KILT’s Implementation

Polar Path’s ability to switch parachain tokens into ERC-20 tokens can be used by any parachain on Polkadot. However, the implementation for KILT includes a key distinction: when switching KILT tokens between Polkadot and Ethereum, the total supply of KILT always remains constant. For each KILT token that exists on the KILT blockchain, there is exactly one KILT ERC-20 token on Ethereum, and vice versa.

Whenever a KILT token is switched to an ERC-20 KILT token on Ethereum, the corresponding KILT token on Polkadot is locked and unavailable. Conversely, when the ERC-20 KILT token is sent back to Polkadot, it is locked and unavailable, and the original KILT token is unlocked. This ensures a one-to-one relationship between KILT on Polkadot and its bridged ERC-20 counterpart on Ethereum, maintaining the overall token supply and preventing inflation or duplication of tokens across chains.

Why Polar Path Matters

Expanding to Ethereum and other ecosystems creates significant opportunities for KILT and similar parachain projects:

- Increased Exposure: Integrating with Ethereum connects KILT tokens to one of the largest decentralized finance (DeFi) ecosystems, allowing holders to access a broad array of DeFi products and services.
- Wider Reach: Ethereum’s vast developer and user community enhances KILT’s visibility, driving broader recognition and adoption within the crypto space.
- Trustless, Secure Switches: Thanks to Snowbridge’s architecture, Polar Path ensures all token switches between networks remain secure and trustless, preserving user confidence without sacrificing security.

Visual Interface for Token Switching

To make the process user-friendly, the Galani Projects team, a valued contributor to the Polkadot community, has developed a proof-of-concept web interface. With this tool, users can seamlessly switch their KILT tokens to ERC-20 tokens using wallets connected to both networks. The interface makes it easy for users to specify the amount, source, and destination accounts, and with a few clicks, they can complete the switch.

In the future, Galani Projects plans to make this code open-source, enabling other parachain projects to use the interface for their tokens as well.

Overcoming Development Challenges

One of the key challenges in developing Polar Path was navigating Polkadot’s Cross-Consensus Message Format (XCM) and the associated fees. To simplify the user experience, we devised a solution that allows users to pay XCM fees in DOT by sending them to the KILT sovereign account on Polkadot’s AssetHub. Once the DOTs arrive, they are burned, and an XCM callback is triggered automatically. This process takes only about 30–40 seconds and eliminates the need for users to interact directly with KILT for fee payments.

Future Plans for KILT: Multi-Chain Expansion

Our Ethereum expansion is just the beginning. We are actively working on extending KILT’s multi-chain presence to other prominent blockchain networks. The vision is to have KILT tokens accessible across various ecosystems, offering even more flexibility and opportunities for our community.

Stay tuned as we continue to roll out updates and new features that will further enhance the flexibility, security, and functionality of KILT tokens across different networks!

You can learn how to bridge your KILT tokens here.

About KILT Protocol

KILT is an identity blockchain for generating decentralized identifiers (DIDs) and verifiable credentials, enabling secure, practical identity solutions for enterprises and consumers. KILT brings the traditional process of trust in real-world credentials (passport, driver’s license) to the digital world while keeping data private and in possession of its owner.

Expanding Horizons: KILT Token’s First Move Towards Multi-Chain With Ethereum was originally published in kilt-protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Switch and Trade Polkadot and Ethereum Tokens with Polar Path


The KILT team is pleased to announce a new “Polar Path” feature that allows users to switch tokens between Polkadot parachains and the Ethereum network. Polar Path is a pallet for Polkadot parachains, already implemented on the KILT chain, that allows parachains to make their native token accessible on the Ethereum network.

The Snowbridge-based web app uses the Polar Path pallet to provide a front end that lets you visually switch tokens between parachain and ERC-20 tokens and transfer switched tokens between Ethereum and parachain networks via Snowbridge.

Project Polar Path is funded by the Polkadot community and developed by the KILT core team. This guide shows how to use the KILT parachain, but the web app will support other parachains in the future.

Prerequisites

To use Polar Path and Snowbridge, you need a Polkadot wallet and Ethereum wallet extension added to your browser. You need to set up the accounts and have sufficient funds in each account to cover the transaction fees.

Set up wallets

Metamask

1. Install the Metamask extension.
2. Create an account.
3. Import the KILT token by clicking + Import tokens, paste “0x5d3d01fd6d2ad1169b17918eb4f153c6616288eb” into the Token contract address field, and click Next.

Polkadot Wallet

1. Install a Polkadot wallet, e.g., the polkadot{.js} extension, Talisman, Subwallet, etc.
2. Create a new account if you don’t already have one.

Switching and Transferring

Switching between Asset Hub and KILT

To switch from KILT to Asset Hub, you need:

- Any Polkadot wallet which allows an Asset Hub account with a connected KILT account, KILT tokens, and DOTs on KILT and Asset Hub to pay XCM fees.
- An Asset Hub account with the same address as your KILT account and DOTs for the existential deposit.

To switch from Asset Hub to KILT, you need:

- Any Polkadot wallet with a connected Asset Hub account, KILT tokens on Asset Hub, and DOTs to cover transaction fees.
- An Asset Hub account with the same address as your KILT account, and DOTs.

Transferring tokens between Asset Hub and Ethereum

To transfer from Asset Hub to Ethereum, you need:

- An Asset Hub account with the same address as your KILT account and enough DOT to cover transaction fees.
- KILT tokens on Asset Hub.
- An Ethereum wallet with an account added.

To transfer from Ethereum to Asset Hub, you need:

- An Ethereum wallet with an account added and enough ETH to cover transaction fees on Ethereum mainnet.
- An Asset Hub account with enough DOT to cover the existential deposit.

Switching Tokens

Go to app.snowbridge.network

To switch between KILT and ERC-20 tokens, select the Polar Path tab. Click Connect Polkadot to connect the app to your wallet extension.

Choose the source and destination networks, and the app loads accounts from the connected wallets to show the Source Account options. The Beneficiary account is always identical to the Source Account. As you change the source or destination network, the other drop-down menu adjusts accordingly to match the appropriate opposite source or destination.

Finally, set the amount. The app now estimates the transfer and XCM fees. If you don’t have sufficient DOT on KILT to cover the XCM fees, don’t worry — clicking the submit button opens a pop-up that lets you transfer some of your DOT from Asset Hub to KILT.

Click the Submit button, sign the transaction in your wallet, and wait for the switch to complete.

Transferring Tokens

Go to app.snowbridge.network

Select the Transfer tab to transfer switched tokens between Ethereum and Asset Hub. Click Connect Ethereum to connect the app to your wallet extension and set the Source Account.

Select the Beneficiary account via the dropdown, which populates from the connected Polkadot wallet.

Finally, set the amount and select KILT as the token you want to transfer. The app now estimates the transfer fees. If you don’t have sufficient ETH to cover the fees, don’t worry — clicking the submit button opens a pop-up that lets you transfer some of your ETH.

Click the Submit button to initiate the transfer.

Note: Transfers to Asset Hub take around 20 minutes to complete. Transfers to Ethereum can take up to 40 minutes.

Switch and Trade Polkadot and Ethereum Tokens with Polar Path was originally published in kilt-protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ocean Protocol

DF110 Completes and DF111 Launches

Predictoor DF110 rewards available. DF111 runs Oct 10 - Oct 17, 2024.

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 110 (DF110) has completed.

DF111 is live today, Oct 10. It concludes on October 17. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF111 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
- To claim ROSE rewards: see the instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF111

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF110 Completes and DF111 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 09. October 2024

paray

The Need to Comply With the CTA comes Into Focus

October 8, 2024 was a bellwether date for those waiting on a court to clarify whether the statutory requirement for filing BOI Reports sits on solid ground. It was on October 8, 2024 when the oral argument in the pending Eleventh Circuit appeal from Small Bus. United d/b/a Nat’l Small Bus. Ass’n v. Janet Yellen, …

KuppingerCole

Adopting Passwordless Authentication


As businesses shift to more flexible work models, traditional password systems pose security risks and inefficiencies. The session will provide insights from recent KuppingerCole research, offering a comprehensive view of the evolving enterprise security landscape.

Join our webinar to explore the transformative potential of passwordless authentication solutions within modern enterprises. As businesses expand and adopt more flexible work models, the inefficiencies and security risks of traditional password systems are increasingly apparent. This session will introduce market trends and insights based on the latest KuppingerCole research, providing a well-rounded perspective on the current and future landscape of enterprise security solutions.

As an expert in digital identity and cybersecurity, KuppingerCole analyst Alejandro Leal will guide attendees through the evolving landscape of passwordless authentication. He will highlight the key features common to various market solutions, along with recent developments and future trends. Drawing from extensive research, Alejandro will emphasize the critical importance of user-friendly and secure authentication methods. He will focus on practical steps organizations can take to effectively implement these technologies, enhancing their security posture and operational efficiency.




SC Media - Identity and Access

SharePoint, OneDrive and Dropbox targeted by BEC attacks

Threat actors step up BEC attacks that rely on sophisticated evasion techniques that result in financial fraud, data loss, and lateral movement.



Transforming Identity Security for a Dynamic Digital World

For cybersecurity professionals seeking the most current insights and solutions to keep up with such high demand, SailPoint Technologies' Navigate conference offers a prime opportunity. The 11th annual event, running Oct. 21-24 in Orlando, is expected to draw 1,500 decision-makers, administrators, operators, and developers for four days of sessions, keynotes, networking, and training.



Microsoft Entra (Azure AD) Blog

Microsoft Security announcements and demos at Authenticate 2024


The Microsoft Security team is excited to connect with you next week at the Authenticate 2024 Conference, taking place October 14 to 16 in Carlsbad, CA! With the rise in identity attacks targeting passwords and MFA credentials, it's becoming increasingly clear that phishing-resistant authentication is critical to counteract these attacks. As the world shifts towards stronger, modern authentication methods, Microsoft is proud to reaffirm our commitment to passwordless authentication and to expanding our support for passkeys across products like Microsoft Entra, Microsoft Windows, and Microsoft consumer accounts (MSA).

 

To enhance security for both consumers and enterprise customers, we’re excited to showcase some of our latest innovations at this event: 

 

- Expanded passkey support for Microsoft Entra ID
- Passkey support for Microsoft Consumer Accounts (MSA)
- Passkeys on Windows: Authenticate Seamlessly with Passkey Providers

 

We look forward to demonstrating these new advancements and discussing how to take a comprehensive approach to modern authentication at Authenticate Conference 2024. 

 

 Where to find Microsoft Security at Authenticate 2024 Conference   

Please stop by our booth to chat with our product team or join us at the following sessions:  

  

Passkeys on Windows: Paving the way to a frictionless future! (UX Fundamentals)

Discover the future of passkey authentication on Windows. Explore our enhanced UX, powered by Microsoft AI and designed for seamless experiences across platforms. Join us as we pave the way towards a passwordless world.

Speakers: Sushma K., Principal Program Manager, Microsoft; Ritesh Kumar, Software Engineer, Microsoft
October 14th, 12:00 – 12:25 PM

Passkeys on Windows: New platform features (Technical Fundamentals and Features)

This is an exciting year for us as we're bringing some great passkey features to Windows users. In this session, I'll discuss our new capabilities for synced passkeys protected by Windows Hello, and I'll walk through a plugin model for third-party passkey providers to integrate with our Windows experience. Taken together, these features make passkeys more readily available wherever users need them, with the experience, flexibility, and durability that users should expect when using their passkeys on Windows.

Speaker: Bob Gilbert, Software Engineering Manager, Microsoft
October 14th, 2:30 – 2:55 PM

We love passkeys - but how can we convince a billion users? (Keynote)

It's clear that passkeys will be a core component of a passwordless future. The usability and security advantages are clear. What isn't as clear is how we actually convince billions of users to step away from a decades-long relationship with passwords and move to something new. Join us as we share insights on how to accelerate adoption when user, platform, and application needs are constantly evolving. We will share practical UX patterns and practices, including messaging, security implications, and how going passwordless changes the concept of account recovery.

Speakers: Scott Bingham, Principal Product Manager, Microsoft; Sangeeta Ranjit, Group Product Manager, Microsoft
October 14th, 5:05 – 5:25 PM

  

Stop by our booth #402 to speak with our product team in person!  

  

Stop counting actors... Start describing authentication events (Vision and Future)

We began deploying multifactor authentication because passwords provided insufficient security. More factors equal more security, right? Yes, but we continue to see authentication attacks such as credential stuffing and phishing! The identity industry needs to stop thinking in terms of the quantity of authentication factors and start thinking about the properties of the authentication event. As we transition into the era of passkeys, it's time to consider how we describe the properties of our authentication event. In this talk, we'll demonstrate how identity providers and relying parties can communicate a consistent, composable collection of authentication properties. To raise the security bar and provide accountability, these properties must communicate not only about the authentication event, but about the security primitives underlying the event itself. These properties can be used to drive authentication and authorization decisions in standalone and federated environments, enabling clear, consistent application of security controls.

Speakers: Pamela Dingle, Director of Identity Standards, Microsoft; Dean H. Saxe, Principal Engineer, Office of the CTO, Beyond Identity
October 16th, 10:00 – 10:25 AM

Bringing passkeys into your passwordless journey (Passkeys in the Enterprise)

Most of our enterprise customers are deploying some form of passwordless credential, or plan to in the next few years; meanwhile, the industry is abuzz with excitement about passkeys. What do passkeys mean for your organization's passwordless journey? Join the Microsoft Entra ID product team as we explore the impact of passkeys on the passwordless ecosystem and share insights from Microsoft's own passkey implementation and customer experiences.

Speakers: Tim Larson, Senior Product Manager, Identity Network and Access, Security, Microsoft; Micheal Epping, Senior Product Manager, Microsoft
October 16th, 11:00 – 11:25 AM

 

We can't wait to see you in Carlsbad, CA for the Authenticate 2024 Conference!

  

 Jarred Boone, Senior Product Marketing Manager, Identity Security  

 

 

Read more on this topic 

- Expanded passkey support for Microsoft Entra ID
- Passkey support for Microsoft Consumer Accounts (MSA)

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

- Microsoft Entra News and Insights | Microsoft Security Blog
- Microsoft Entra blog | Tech Community
- Microsoft Entra documentation | Microsoft Learn
- Microsoft Entra discussions | Microsoft Community

SC Media - Identity and Access

New fraud intelligence unveiled by Trulioo

Aside from being integrated with the firm's Identity Document Verification, Know Your Customer data checks, and Watchlist Screening, Trulioo Fraud Intelligence generates accurate fraud risk scores for organizations through predictive risk intelligence, machine learning, and industry models, according to the company.



Microsoft Entra (Azure AD) Blog

What's new in Microsoft Entra - September 2024


We're excited to announce the general availability of Microsoft Entra Suite—one of the industry's most comprehensive secure access solutions for the workforce. With 66% of digital attack paths involving insecure credentials¹, Microsoft Entra Suite helps prevent security breaches by enabling secure access to cloud and on-premises apps with least privilege, inside and outside the corporate perimeter. It unifies network access, identity protection, governance, and verification to streamline onboarding, modernize remote access, and ensure secure access to apps and resources. Get started with a Microsoft Entra Suite trial.

 

Last November, we launched the Secure Future Initiative (SFI) at Microsoft to combat the increasing scale of cyberattacks. Security now drives every decision we make, as detailed in the September 2024 SFI Progress Report. Today, we’re sharing new security improvements and innovations across Microsoft Entra from July to September 2024, organized by product to help you quickly find what’s relevant to your deployment.

 

Watch the video "What's New in Microsoft Entra" for a quick overview of product updates and visit the What's New blade in the Microsoft Entra Admin Center for detailed information.

 

 

Microsoft Entra ID

 

New releases

- Admin provisioning of FIDO2 security keys (passkeys) on behalf of users
- Insider risk condition in Conditional Access
- Device-based Conditional Access to M365/Azure resources on Red Hat Enterprise Linux
- Attacker in the Middle detection in Identity Protection
- Active Directory Federation Services (AD FS) application migration wizard
- Microsoft Authenticator on Android is FIPS 140 compliant for Entra authentication

 

Change announcements

 

Security improvements

 

Upcoming MFA enforcement on Microsoft Entra admin center

[Action may be required]

 

As part of our commitment to providing our customers with the highest level of security, we previously announced that Microsoft will require multifactor authentication (MFA) for users signing into Azure. We’d like to share an update that the scope of MFA enforcement includes Microsoft Entra admin center in addition to the Azure portal and Intune admin center. This change will be rolled out in phases, allowing organizations time to plan their implementation:

 

Phase 1: Starting on or after October 15, 2024, MFA will be required to sign into the Entra admin center, Azure portal, and Intune admin center. The enforcement will gradually roll out to all tenants worldwide. This phase will not impact other Azure clients such as the Azure Command Line Interface, Azure PowerShell, Azure mobile app, and Infrastructure as Code (IaC) tools.

 

Phase 2: Beginning in early 2025, gradual enforcement of MFA at sign-in for the Azure CLI, Azure PowerShell, Azure mobile app, and Infrastructure as Code (IaC) tools will commence.

 

Microsoft will send a 60-day advance notice to all Entra global admins by email and through Azure Service Health Notifications to notify them of the start date of enforcement and required actions. Additional notifications will be sent through the Azure portal, Entra admin center, and the M365 message center.

 

We understand that some customers may need additional time to prepare for this MFA requirement. Therefore, Microsoft will allow extended time for customers with complex environments or technical barriers. The notification from us will also include details about how customers can postpone the start date of enforcement for their tenants, the duration of the postponement, and a link to apply. To learn more, read the blog, “MFA enforcement for Microsoft Entra admin center sign-in coming soon.”

 

Date change announcement: Deprecation of keychain-backed device identity for Apple devices

[Action may be required]

 

Earlier this year, we announced the upcoming deprecation of keychain-backed device identity for Apple devices on the Microsoft Entra ID platform. The previously announced deprecation date of June 2026 has been accelerated to June 2025 as part of our commitment to secure design and defaults. This change is being made to enhance device security and better protect your data.

 

Once in effect, this deprecation will ensure that newly registered Apple devices managed by Microsoft Entra ID use strong, hardware-bound cryptographic secrets, backed by Apple’s Secure Enclave. To learn more, we encourage you to review our updated documentation on this deprecation. We advise both consumers and vendors of applications to test their software for compatibility with this new datastore.

 

Upgrade to the latest version of Microsoft Entra Connect by April 2, 2025

[Action may be required]

 

In early October 2024, we will release a new version of Microsoft Entra Connect Sync that contains a back-end service change that further hardens our services. To avoid service disruptions, customers are required to upgrade to that version (2.4.XX.0) by early April 2025 (exact deadline to be announced upon version release).

 

Review our roadmap for a timeline of upcoming releases, so that you can plan your upgrade accordingly. We will auto-upgrade customers where supported, alongside an early 2025 release of Connect Sync. For customers who wish to be auto-upgraded, ensure that you have auto-upgrade configured.

 

For a list of minimum requirements and expected impacts of the service change, please refer to this article. For upgrade-related guidance, check out our docs.

 

New Certificate Authorities (CAs) for login.microsoftonline.com: Action required from customers who only trust DigiCert certificates

[Action may be required]

 

Microsoft Entra ID is introducing new Certificate Authorities (CAs) for server certificates for the domain login.microsoftonline.com. Currently, connections to login.microsoftonline.com are exclusively presented with DigiCert certificates. Starting on October 1, 2024, you may also encounter certificates issued by Microsoft Azure CAs. This update is designed to enhance security and improve the resilience of Entra ID. This could impact customers who do not trust Microsoft Azure CAs or have pinned client-side to DigiCert certificates, as they may experience authentication failures.

 

Recommended Action:

To prevent potential issues, we recommend trusting all Root and Subordinate CAs listed in the Azure Certificate public documentation. This documentation has included Microsoft Azure CAs for over a year. If you are an Entra ID user who uses the login.microsoftonline.com domain, it’s crucial to remove any client-side pinning to DigiCert and trust the new Azure CAs for a seamless transition. For more details on how to ensure uninterrupted and secure service, please read the Client Compatibility for public PKIs documentation.
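If you want to confirm which CA chain your clients are actually being served, one quick check is to open a TLS connection and print the certificate issuer. A minimal sketch in Python (standard library only; the hostname comes from the announcement above, everything else is illustrative):

```python
import socket
import ssl

HOST = "login.microsoftonline.com"

# Open a verified TLS connection and fetch the certificate the endpoint presents.
ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

# 'issuer' identifies the CA that signed the leaf certificate, e.g. a DigiCert CA
# today or a Microsoft Azure CA after the transition described above.
issuer = {k: v for rdn in cert["issuer"] for (k, v) in rdn}
print(issuer.get("organizationName"), "/", issuer.get("commonName"))
```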

 

Microsoft Copilot update to enterprise data protection

[No action is required]

 

Last month, we made several updates to the free Microsoft Copilot service for users with a Microsoft Entra account to enhance data security, privacy, and compliance and simplify the user experience. For users signed in with an Entra account, Microsoft Copilot will offer enterprise data protection (EDP) and redirect users to a new simplified, ad-free user interface designed for work and education. 

 

With EDP in Microsoft Copilot, your data is private, it isn’t used to train foundation models, and we help protect it at rest and in transit with encryption. For more details on EDP, please review our documentation.

 

If you or your users have a Microsoft 365 subscription in addition to an Entra account, you can enable in-app access by pinning Microsoft Copilot. If you elect to pin Microsoft Copilot for your users, it will appear in the Microsoft 365 app starting mid-September, and it will be coming soon to Microsoft Teams and Outlook. Additional functionality in Microsoft Copilot like chat history is also available for users with a Microsoft 365 subscription.

 

For additional information about these changes, whether you or your users have a Microsoft 365 subscription or not, please visit our blog and FAQ.

 

We hope you are as excited as we are about these updates to Microsoft Copilot. If you would like to try Microsoft Copilot updated with enterprise data protection prior to mid-September, a private preview is available (space limited). To apply, please fill out our form.

 

Enable Browser Access (EBA) by default for all Android users

[No action is required]

 

As part of ongoing security hardening, we are deprecating the Enable Browser Access (EBA) user interface in the Android Authenticator and Company Portal apps. Consequently, browser access will be enabled by default for all Android users. This change will occur automatically, so no action is required from admins or Android users.

 

Restricted permissions on Directory Synchronization Accounts (DSA) role in Microsoft Entra Connect Sync and Cloud Sync

[No action is required]

 

As part of ongoing security hardening, we’ve removed unused permissions from the privileged "Directory Synchronization Accounts" role. This role is exclusively used by Connect Sync and Cloud Sync to synchronize Active Directory objects with Entra ID. There is no action required by customers to benefit from this hardening. Please refer to the documentation for details on the revised role permissions.

 

Upcoming improvements to the SSO enrollment dialog

[No action is required]

 

We're making some improvements to the end-user experience when users add their account to a Windows device. We've refined the messaging in the SSO enrollment dialog (consent) to make it easier for end users to understand the choices they can make and the impact of those choices. The changes also include a 'Learn more' link on the screen, pointing to a Microsoft Learn article that gives users more information to help them make informed choices. The new SSO enrollment dialog will be gradually introduced starting in October 2024. Please check here for more details.

 

Identity modernization

 

Important Update: Azure AD Graph Retirement

[Action may be required]

 

The retirement of the Azure AD Graph API service began on 1 September 2024, and will eventually impact both new and existing applications. As this phase rolls out over the coming weeks, new applications will not be able to use Azure AD Graph APIs unless they are configured for extended access. Microsoft Graph is the replacement for Azure AD Graph APIs, and we strongly recommend immediately migrating use of Azure AD Graph APIs to Microsoft Graph and limiting any further development using Azure AD Graph APIs.

 

Timeline for incremental retirement of Azure AD Graph API service:

- 1 September 2024: No impact to existing apps. New apps are blocked from using Azure AD Graph APIs unless configured to allow extended Azure AD Graph access by setting blockAzureAdGraphAccess to false; any new apps must use Microsoft Graph.
- 1 February 2025: Existing apps are unable to make requests to Azure AD Graph APIs unless configured to allow extended Azure AD Graph access by setting blockAzureAdGraphAccess to false.
- 1 July 2025: Azure AD Graph is fully retired. No Azure AD Graph API requests will function.

Action required:

 

To avoid service disruptions, please follow our instructions to migrate applications to Microsoft Graph APIs.

If you need to extend Azure AD Graph access for an app to July 2025

 

If you have not fully completed app migrations to Microsoft Graph, you can extend this retirement. If you set the blockAzureADGraphAccess attribute to false in the application’s authenticationBehaviors configuration, the application will be able to use Azure AD Graph APIs through June 30, 2025. Further documentation can be found here.  

 

New applications will receive a 403 error when attempting to access Azure AD Graph APIs unless this setting is set to false. For existing applications that will not complete migration to Microsoft Graph in 2024, you should plan to set this configuration now. 
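For illustration, here is a minimal sketch of setting that flag through the Microsoft Graph beta endpoint for an application's authenticationBehaviors, which the paragraphs above reference. Token acquisition is omitted, the object-ID placeholder is hypothetical, and property casing and required permissions should be verified against the linked documentation:

```python
import requests

TOKEN = "<access-token>"                   # assumed: permission to update applications
APP_OBJECT_ID = "<application-object-id>"  # the app's object id, not its appId

resp = requests.patch(
    f"https://graph.microsoft.com/beta/applications/{APP_OBJECT_ID}/authenticationBehaviors",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    # false = allow extended Azure AD Graph access through June 30, 2025
    json={"blockAzureADGraphAccess": False},
)
resp.raise_for_status()  # expect 204 No Content on success
```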

 

If you need to find Applications in your tenant using Azure AD Graph APIs 

 

The Microsoft Entra recommendations feature provides recommendations to put your tenant in a secure and healthy state, while also helping you maximize the value of the features available in Entra ID.    

 

We’ve provided two Entra recommendations that show information about applications and service principals that are actively using Azure AD Graph APIs in your tenant. These new recommendations can support your efforts to identify and migrate the impacted applications and service principals to Microsoft Graph. 
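As a sketch of how those recommendations could be pulled programmatically rather than read in the portal, the beta recommendations endpoint can be listed and filtered client-side. The endpoint shape, permission, and display-name match below are assumptions based on the recommendations feature described above, not confirmed by this post:

```python
import requests

TOKEN = "<access-token>"  # assumed: a scope that can read Entra recommendations

resp = requests.get(
    "https://graph.microsoft.com/beta/directory/recommendations",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# Surface the Azure AD Graph migration recommendations by display name.
for rec in resp.json().get("value", []):
    if "Azure AD Graph" in rec.get("displayName", ""):
        print(rec.get("displayName"), "->", rec.get("status"))
```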

 

References:

- Migrate from Azure Active Directory (Azure AD) Graph to Microsoft Graph
- Azure AD Graph app migration planning checklist
- Azure AD Graph to Microsoft Graph migration FAQ

 

Important Update: AzureAD PowerShell and MSOnline PowerShell retirement

[Action may be required]

 

As of March 30, 2024, the legacy Azure AD PowerShell, Azure AD PowerShell Preview, and MS Online modules are deprecated. These modules will continue to function through March 30, 2025, after which they will be retired and stop functioning. Microsoft Graph PowerShell SDK is the replacement for these modules and you should migrate your scripts to Microsoft Graph PowerShell SDK as soon as possible. 

 

To help you identify usage of Azure AD PowerShell in your tenant, you can use the Entra Recommendation titled Migrate Service Principals from the retiring Azure AD Graph APIs to Microsoft Graph. This recommendation will show vendor applications that are using Azure AD Graph APIs in your tenant, including AzureAD PowerShell. 

 

We are making substantial new and future investments in the PowerShell experience for managing Entra, with the recent Public Preview launch of the Microsoft Entra PowerShell module. This new module builds upon the Microsoft Graph PowerShell SDK and brings scenario-focused cmdlets. It’s fully interoperable with all cmdlets in the Microsoft Graph PowerShell SDK, enabling you to perform complex operations with simple, well documented commands. The module also offers a backward compatibility option to simplify migration from the deprecated AzureAD Module.

 

Microsoft Graph APIs were recently made available to read and configure Per-user MFA settings for users, and availability in Microsoft Graph PowerShell SDK cmdlets is soon to follow.
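As a sketch of what those per-user MFA APIs look like, based on our reading of the announcement (the beta requirements endpoint and the perUserMfaState property are assumptions here; verify the names against the Microsoft Graph documentation before relying on them):

```python
import requests

TOKEN = "<access-token>"         # assumed: authentication-method admin permissions
USER = "<user-object-id-or-upn>"

url = f"https://graph.microsoft.com/beta/users/{USER}/authentication/requirements"
headers = {"Authorization": f"Bearer {TOKEN}"}

# Read the user's per-user MFA state: "enabled", "enforced", or "disabled".
state = requests.get(url, headers=headers)
state.raise_for_status()
print(state.json().get("perUserMfaState"))

# Example: move the user off per-user MFA, e.g. before switching to Conditional Access.
requests.patch(
    url,
    headers={**headers, "Content-Type": "application/json"},
    json={"perUserMfaState": "disabled"},
).raise_for_status()
```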

 

License assignment modifications will no longer be supported in the Microsoft Entra Admin Center

[Action may be required]

 

This is a courtesy reminder that, in mid-September, we rolled out a change that no longer supports the modification of user and group license assignments in the Microsoft Entra Admin Center and the Microsoft Azure Admin Portal. Moving forward, you will have read-only access to license assignments in these portals. If you wish to modify user and group license assignments via the user interface, you will need to visit the Microsoft 365 Admin Center. Please note that this change does not impact the API or PowerShell modules. If you experience any issues with license assignment, please reach out to Microsoft 365 support. To learn more, click here.

 

Dynamic type versioning in Bicep templates for Microsoft Graph

[Action may be required]

 

In October 2024, we're introducing an update to the Bicep templates for Microsoft Graph public preview. The dynamic types feature enables semantic versioning for Microsoft Graph Bicep types, for both beta and v1.0. During Bicep file authoring, you specify a Microsoft Graph Bicep type version referenced from the Microsoft artifact registry, instead of using a built-in NuGet package, which is the current experience. Using dynamic types will allow future breaking changes in existing Microsoft Graph Bicep resource types without impacting deployment of your existing Bicep files that use older versions of those resource types.

 

Built-in types are deprecated and will be retired on January 24, 2025. Until the retirement date, built-in types will coexist with the new dynamic types. Any Microsoft Graph Bicep type changes will only be available through new versions of the dynamic types.

 

Action required:

 

Switch to the new dynamic types before January 24, 2025 to avoid Bicep template deployment failures. The switch involves making some minor updates to your bicepconfig.json and main Bicep files. Additionally, to take advantage of any updated or new Microsoft Graph resource types, you will need to update the type version that your Bicep files use. For next steps, click here.

 

Retirement of legacy user authentication methods management experience in Entra Portal

[No action is required]

 

Starting October 31st, 2024, we will retire the ability to manage user authentication methods in the Entra Portal via the legacy user interface (UI). Instead, we will only surface the modern UI which has full parity with the legacy experience in addition to the ability to manage modern methods (e.g. Temporary Access Pass, Passkeys, QR+Pin, etc.) and settings. This will not impact how end users can manage their own authentication methods or their ability to sign-in to Entra. Learn more at Manage user authentication methods for Microsoft Entra multifactor authentication.
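The methods the modern UI manages can also be enumerated through Microsoft Graph. A minimal sketch, assuming an access token with permission to read a user's authentication methods (the scope named in the comment is an assumption):

```python
import requests

TOKEN = "<access-token>"         # assumed: e.g. UserAuthenticationMethod.Read.All
USER = "<user-object-id-or-upn>"

resp = requests.get(
    f"https://graph.microsoft.com/v1.0/users/{USER}/authentication/methods",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# Each method carries an @odata.type, e.g. #microsoft.graph.fido2AuthenticationMethod
# or #microsoft.graph.temporaryAccessPassAuthenticationMethod.
for method in resp.json().get("value", []):
    print(method.get("@odata.type"), method.get("id"))
```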

 

Deprecating Enable Browser Access (EBA) UI

[No action is required]

 

EBA is a feature in Android broker apps (such as Company Portal and Authenticator) that enables duplicating the Entra ID device registration certificate to a global keychain location on the Android device. This allows browsers that are not integrated with brokers, such as Chrome, to access the certificate for device authentication, which is required to comply with Entra device compliance policies.

 

As part of our overall security hardening efforts, we're migrating Entra ID device registration certificates and Android device identities to be hardware-bound. This will enable token protection policies in the future and protect against bypassing device compliance policies. Since the device identity will be hardware-bound, the EBA UI will no longer be able to duplicate and export keys on demand. We plan to deprecate the Enable Browser Access (EBA) UI in the Authenticator and Company Portal apps, and browser access (e.g., Chrome) will automatically be enabled during device registration.

 

This capability already exists for Intune MDM users. The change extends it to non-Intune users, such as those using VMware and Jamf mobile device management (MDM) software. This will apply to all customers in the first half of the 2025 calendar year. No action is required from customers at this time.

 

Deferred changes to My Groups admin controls

[No action is required]

 

In October 2023, we shared that, starting in June 2024, the existing Self-Service Group Management setting in the Microsoft Entra Admin Center that states "restrict user ability to access groups features in My Groups" would be retired. These changes are under review and will not take place as originally planned. A new deprecation date will be announced in the future.

 

My Security Info Add Sign-In Method picker user interface update

[No action is required]

 

This is a courtesy reminder that, starting in August 2024, the "Add Sign-In Method" dialog on the My Security Info page was updated with improved sign-in method descriptions and a modern look and feel. With this change, when users click "Add Sign-In Method," they will initially be recommended to register the strongest method available to them, as allowed by the organization's authentication method policy. Users will also have the option to select "Show More Options" and choose from all available sign-in methods permitted by their policy. No admin action is required.

 

Provisioning UX modernization

[No action is required]

 

We're modernizing the current application/HR provisioning and cross-tenant sync UX. This includes a new overview page, a new experience for configuring connectivity to your application, and new scoping and attribute-mapping experiences. The new experience includes all functionality available to customers today, and no customer action is required. The new experience will start rolling out at the end of October 2024, but customers can still use the existing experience through January 2025.

 

Enhancing user experience

 

Moving from a browse-based to a search-based solution for access package discovery

[Action may be required]

 

We're excited to introduce a new feature in My Access: a curated list of recommended access packages. This will allow users to quickly view the most relevant access packages without scrolling through a long list.  The final tab will be a complete, searchable list of all visible access packages in the tenant. We’ll deploy this to all customers as an opt-in preview by the end of October, with in-product messaging to highlight the change. By the end of November, it will transition to an opt-out preview, with general availability planned for December.

 

Microsoft Entra ID Governance 

New releases

- Enable, Disable and Delete synchronized user accounts with Lifecycle Workflows
- Configure Lifecycle Workflow scope using Custom Security Attributes
- Workflow History Insights in Lifecycle Workflows
- Configure custom workflows to run mover tasks when a user's job profile changes
- Grace period support for entitlement management-managed guest users
- Cross-tenant manager synchronization

 

Microsoft Entra External ID 

New releases

- Easy authentication with Azure App Service and Microsoft Entra External ID
- Microsoft Entra External ID extension for Visual Studio Code

 

Microsoft Entra Verified ID 

New releases 

- Verified ID Face Check
- Enabling public publishing of custom credentials via the admin portal UI

 

Microsoft Entra Internet Access 

New releases 

Microsoft Entra Internet Access 

 

Microsoft Entra Private Access 

New releases 

Microsoft Entra Private Access 

 

Global Secure Access: Microsoft Entra Internet and Microsoft Entra Private Access

 

Change announcements

 

Upcoming license enforcement for Microsoft Entra Internet Access and Microsoft Entra Private Access

[Action may be required]

 

Starting early October 2024, license enforcement will begin in the Microsoft Entra admin center for Microsoft Entra Internet Access and Microsoft Entra Private Access. This follows a 90-day notification period that started with the general availability of Microsoft Entra Internet Access and Microsoft Entra Private Access in July 2024. Learn more about Global Secure Access.

 

30-day trials are available for both licenses. Learn more on pricing. 

 

Best Regards,

Shobhit Sahay

 

 

What’s New in Microsoft Entra 

Stay informed about Entra product updates and actionable insights with What’s New in Microsoft Entra.  This new hub in the Microsoft Entra admin center offers you a centralized view of our roadmap and change announcements across the Microsoft Entra identity and network access portfolio. 

  

Learn more about Microsoft Entra 

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

- Microsoft Entra News and Insights | Microsoft Security Blog
- Microsoft Entra blog | Tech Community
- Microsoft Entra documentation | Microsoft Learn
- Microsoft Entra discussions | Microsoft Community

SC Media - Identity and Access

Inching toward identity authentication perfection: Passwordless, secretless

No matter how much security training an organization conducts for its staff, if your plan involves people not failing, then your plan is going to fail, said Portnox CEO Denny LeCompte.



Trinsic Podcast: Future of ID

Rohan Pinto - 1Kosmos's Journey from Blockchain to Passwordless Authentication


In this episode of The Future of Identity Podcast, I’m joined by Rohan Pinto, Co-founder and CTO of 1Kosmos, a company at the forefront of decentralized identity and passwordless authentication solutions. We explore the evolution of identity management and the journey from blockchain-based beginnings to building secure, user-controlled identity systems that go beyond traditional centralized approaches.

We dive into several key topics, including:

- Rohan’s background in identity and access management, and his transition into building cryptographic solutions that emphasize user control over their identities.
- The role of blockchain as an enabler in identity verification and why it’s not the complete solution to today’s identity challenges.
- 1Kosmos’s unique approach to authentication, including their pivot from blockchain to passwordless access using biometric verification.
- The challenges and potential of user-controlled identity and verifiable credentials, and why widespread adoption has been slower than expected.
- Rohan’s perspective on the future of identity, including how decentralized identifiers and biometrics will reshape how we access systems and interact with digital services.

Rohan shares insights from his new book and offers a deep dive into the complexities and opportunities of building a more secure, user-centric identity ecosystem. This episode is a must-listen for anyone interested in the future of identity, security, and the evolving digital landscape.

You can learn more about 1Kosmos at 1kosmos.com.

Subscribe to our weekly newsletter for more announcements related to the future of identity at trinsic.id/podcast

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.


Thales Group

Thales to supply handheld thermal imagers to the Canadian Army

The Thales Sophie Ultima long-range handheld thermal imagers have been selected by the Canadian Armed Forces, the first contract awarded under Canada's Night Vision Systems Modernization (NVSM) project. Manufactured and maintained in Canada, the Sophie Ultima will enhance operational capabilities for the Canadian Army with advanced technology and resilient navigation. This contract award further affirms Thales' commitment to Canada with significant local industrialization, skills development and training in Quebec.

Thales Canada is pleased to announce that the Government of Canada has awarded a contract to Thales Canada for the acquisition of its Sophie Ultima Handheld Thermal Imager (HHTI) as part of the Night Vision Systems Modernization (NVSM) project. This award marks an important advancement in Canada’s defence capabilities, ensuring that the Canadian Armed Forces (CAF) are equipped with cutting-edge technology designed to excel in complex and challenging operational environments.

The Sophie Ultima, a lightweight, handheld thermal imager, is engineered to deliver extraordinary performance in the field. With a high-performance infrared channel, it offers NATO tank recognition range performance of up to 6 kilometres. The continuous optical zoom and wide 20° field of view enable operators to maintain visual contact with targets during detection, recognition, and identification phases, ensuring rapid and precise engagement.

Thales will manufacture and maintain the Sophie Ultima at its existing Canadian Electro-Optics Center of Excellence, further strengthening Canada’s defence industrial base. This initiative will create new jobs and spur economic growth, expanding Thales’s current supply chain within Canada. In addition, the Thales Optronics facility in Montreal will provide comprehensive in-service support, ensuring that the Canadian Armed Forces benefit from a dedicated repair facility with rapid turnaround, reducing equipment downtime.

“Thales is committed to delivering advanced, reliable, and locally supported solutions like the Sophie Ultima,” said Benoit Plantier, Vice President, Optronics, Missile Electronics and Unmanned Air Systems, Thales.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.

About Thales Canada

A Canadian leader in research and technology, Thales Canada combines over 50 years of experience with the talent of over 1,300 skilled people from coast-to-coast. Thales Canada offers leading capabilities in the defence, civil aviation, digital identity and security sectors – meeting the most complex needs and requirements of its customers across all operating environments.


Ockto

CCD2 is coming: here's what lenders will face

The consumer credit market is on the eve of a major change with the tightened Consumer Credit Directive 2 (CCD2). This new European directive will affect the way lenders operate and will bring new parties under supervision that currently fall outside it. CCD2 aims to strengthen consumer protection and create a level playing field for credit providers in Europe.


CCD2 and the increased regulatory burden on consumer credit

This episode of the Data Sharing Podcast covers the Consumer Credit Directive 2 (CCD2). Hidde talks with Earvin van Ginkel, senior policy officer at the Vereniging voor Financieringsondernemingen Nederland (VFN, the Dutch association of finance companies).


Ocean Protocol

bci/acc: A Path to Balance AI Superintelligence

bci/acc: A Pragmatic Path to Compete with Artificial Superintelligence

An e/acc zoom-in on brain interfaces, towards human superintelligence

Summary

Artificial superintelligence (ASI) is perhaps 3–10 years away. Humanity needs a competitive substrate. BCI is the most pragmatic path. Therefore, we need to accelerate BCI and take it to the masses: bci/acc. How do we make it happen? We'll need BCI killer apps like silent messaging to create market demand, which in turn drives BCI device evolution. The net result is human superintelligence (HSI).

bci/acc draws on today’s technologies without requiring big scientific breakthroughs. It’s e/acc zoomed-in for BCI. It’s solarpunk: optimistic and perhaps a little gonzo. And it could be a grand adventure for Humanity.

Based on talks at Foresight Institute in Dec 2023 & Nov 2023 [video] and NASA Oct 2023. They extend this 2016 blog post, and this 2012 talk at BrainTalks@UBC.

=== Contents ===

1. Introduction

2. Artificial Superintelligence
2.1 How market forces drive ASI
2.2 The journey to artificial superintelligence
2.3 ASI Risk
2.4 Approaches to ASI Risk
- Decelerate -> let evolution happen -> speed it up (e/acc)
- Cage -> fancier cage
- Align post-hoc -> dumb-to-smart chain -> during training
- Get competitive (bci/acc)

3. Human Superintelligence, via bci/acc
3.1 Introduction
- High-bandwidth BCI challenges
- Implants-first vs masses-first
3.2 Baseline tech for bci/acc
- EEG for typing; for focus, more
- Glasses with subtitles; with voice interface
- AR Goggles + hand gestures: Meta Quest 3
- AR Goggles + eye-tracking: Apple Vision Pro
- Eye-tracking is BCI
3.3 BCI killer apps
- Silent messaging; internal dialog
- Perfect memory; share visual memories
- Talk in pictures; talk in brain signals
3.4 The journey to high-bandwidth BCI
- Bandwidth++ via implants; via optogenetics. Bike-shedding.
- Invasive BCI into mainstream -> growth
- Your BCI will be part of *you* -> hyper-local alignment
3.5 The journey to human superintelligence
3.6 Cognitive liberty

4. Conclusion
5. Appendix

1. Introduction

It was summer 1995. In the pages of Wired magazine, I read about a new product called MindDrive: “The first computer product operated by human thought”. I was skeptical. But I had to try it! So I dropped $150 and got one.

I’d slip the MindDrive on my index finger, and boot up into the game “MindSkier”. I’d ski downhill in first-person view, and try to steer between the 30 or so pairs of gates. I’d steer by “thinking”. It was actually an echo of my thoughts: the device’s gold-plated sensor tracked my skin conductivity (GSR). I would miss about 30% of the gates, compared to missing 80% of them if the device wasn’t on my finger at all. It worked, barely. A starting point for the next!

Left: the MindDrive. Right: In Rosie Revere, Engineer, Rosie’s great aunt teaches her a brilliant lesson.

At a giant engineering science fair, I set up the MindDrive for anyone to try. There was a line around the block [Spec1999]. There was a latent interest in BCI.

In 2001, I splurged $2K and bought an “Interactive Brainwave Visual Analyser” (IBVA). I’d wear a blue headband holding sticky electrodes to sense electrical signals on my forehead, i.e. an electroencephalogram (EEG). It sent the EEG signals to my computer, which got displayed as animated 3d graphics. More usefully, I could access the signals directly with my own software — so I did. I could hack BCI! Alas, it was hard to get good signals. I also tried OCZ NIA and Emotiv EPOC later on, but they weren’t qualitatively better.

From these limited experiments — and adjacent work in AI and analog circuits — I had a feeling that BCI bandwidth could be optimized a lot. This 2012 work from Tsinghua University confirmed my hunch, achieving moderate typing speeds [Tsh2012]. A decade of optimizing later, we’re now at 62 words per minute (very good).

High-bandwidth BCI is not a scientific mystery; it’s an engineering problem.

Why might we be interested in high bandwidth BCI?

The answer is artificial superintelligence (ASI): AI machines with 1000x+ the cognitive ability of humans. ASI may happen as soon as 3-10 years from now. Market forces are pushing it into existence because there’s a lot of money at stake.

How do we, as humans, have a role in a world of AI machines with 1000x our cognitive abilities?

Humans need a substrate that's actually competitive with ASI: silicon. The best way to do that is brain-computer interfaces (BCIs). We've got to do this soon enough for ASI time scales; therefore, we need to accelerate BCI and get mass adoption. The net result will be human superintelligence (HSI).

The rest of this article has two sections:

- Artificial superintelligence (ASI): what's driving ASI, ASI risk, and approaches to address risk.
- Human superintelligence: how to accelerate BCI and achieve mass adoption, to get Humanity competitive with ASI.

2. Artificial Superintelligence (ASI)

2.1 How market forces drive ASI

Market forces have been driving AI compute up. The plot below shows how the compute for AI training has risen, from about 1950 until now (2024). The y-axis is logarithmic: each tick is another order of magnitude, so while the drawn curve looks linear, the trend is exponential.

Market forces are driving AI compute up. [Graph from LessWrong.com, with my 20 PFLOPs overlay]

The compute has grown quickly. It started at 100 (10²) floating point operations per second (FLOPs) in 1950, and is at 10²⁴ now. That's 22 orders of magnitude of compute power in three-quarters of a century. To intuit just how much growth this is: it's the difference between 1mm, vs flying to Alpha Centauri and back 10,000 times.

From this growth, we now have a lot of compute. To help intuition: George Hotz frames 20 PetaFLOP/s as “1 person” worth of brainpower (compute). This is akin to 746 Watts being “1 horse” worth of power (1 hp). Just as it’s easier to reason about horses worth of power, it’s easier to reason about persons worth of compute. We surpassed “1 person” worth of compute in about 2012. Now we’re 10 million times beyond; it’s like all the brainpower of NYC rolled into one compute system.
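As a rough check on those figures, using only the numbers quoted in this article (the 20 PFLOP/s "person" is Hotz's heuristic, not a measurement):

$$\frac{10^{24}\ \text{FLOP/s (total today)}}{2\times 10^{16}\ \text{FLOP/s (one "person")}} = 5\times 10^{7}\ \text{persons}$$

That is on the order of ten million "persons" of compute, consistent with the "all the brainpower of NYC rolled into one" comparison.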

Market forces have driven compute up because it meant more money. More compute unlocked more markets, each of which was highly lucrative: from space & radio to TV, from the PC to the cellphone, from the smartphone to AI now and AR/VR soon. AI has a voracious appetite for compute, with $ benefits that accrue. That's why there's so much money flowing into AI right now, and no sign of abating.

2.2 Path to ASI

For decades, we've had AIs that can do tasks that only a human could previously do. That is, narrow AI. Examples are antenna design and analog circuit synthesis. For almost as long, we've had AIs that can do a task at a level far exceeding a human. These are also called narrow AI. Examples are digital circuit synthesis and software compilers.

We're about to get AI that can do all tasks that only a human could previously do. That is, artificial general intelligence (AGI). To riff on Paul Graham, AIs will have progressed from "smart" (good at one thing) to "wise" (decent at everything).

Market forces will drive AGI from 1x smarter than humans, to 2x, to 10x, then 100x, then 1000x. It will happen quickly: there is $ to be made. We’ll arrive at AI that can do all tasks at a level far exceeding any human. That is, artificial superintelligence (ASI).

ASIs will be wildly smarter than humans. In humans 2 is an idiot and 6 is an Einstein; so what is 1000 or 1,000,000? [Rutt2024a] It’s such a difference that it’s hard to imagine as a possibility; this cognitive dissonance will prevent most people from truly realizing this until it’s right upon them.

ASIs will be wildly smarter than humans [From @AiSafetyMemes]

2.3 ASI Risk

Humans are 1000x+ smarter than ants. As humans, we don’t respect the rights of ants or “what the ants have to say”. We are their gods.

ASIs will be 1000x+ smarter than humans. We are now the ants. There is little guarantee that ASIs will respect our rights. This is ASI risk.

What will it feel like? God-like intelligence will beget god-like power: the ASIs will become our gods. In the Hyperion sci-fi series, ASIs exist, yet humans still barely comprehend them, except to know that ASI power is unimaginably vast [Hyp1989].

What can we do about ASI risk? Section 2.4 reviews various ideas.

2.4 Approaches to ASI Risk

2.4.1 Idea: decelerate

Yudkowsky and others advocate to slow down or pause AI progress, then figure out how to solve ASI risk. It’s highly appealing at first glance. As with all such ideas: one must be careful, because wishing doesn’t make it true.

Alas, there is a problem: for such a deceleration to work, all deceleration efforts would need to be successful. If even just one entity defects, they could dominate the others. And that’s why this route likely won’t happen. There’s an AI race; at the core, it’s China vs USA, and there’s too much at stake for one side to cede speed to the other. So the race will go on. It’s like nuclear: for all the disarmament theatre, we still have the nukes.

Alas, what might happen is deceleration for all players except the US government and Chinese government (plus their proxies), and organized criminals. This hurts human freedom because it diminishes “voice” and “exit” for individuals, not to mention freedom to work on solving ASI risk 🤦 [Verd2023]. It’s a common trick for governments to use the banner of safety to take further control [Snow2013].

“They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety.” — Benjamin Franklin

Perhaps most importantly, this “approach” doesn’t actually address the problem of ASI risk. That is, if AI was decelerated, then we’d still have to solve the core problem of ASI risk! That’s what most other ASI-risk approaches aim to do.

2.4.2 Idea: let evolution happen

“Let evolution happen” is the framing of Google founder Larry Page, and many others. They see humans as simply one step in the tree of evolution; that ASI is the next step; that we should be proud that we made the next step happen; and that if our biological bodies can’t compete (they can’t) then we should let go and get over it; that this is evolution.

From my work on evolutionary computation, I’ve seen how powerful evolution can be. It doesn’t matter whether we like this framing, this really could be the scenario that happens.

However, letting go is not a solution to ASI risk. Personally, I’d love to keep building and playing for as long as I can, in a grand adventure, until I opt-in to end that adventure. While people have invented a thousand rationalizations for death, I choose life until further notice. Humanity should be the same. We have a potential grand adventure in front of us! So we should rage, rage against the dying of the light. Humanity should choose life until further notice.

2.4.3 Idea: speed it up (e/acc)

"Effective accelerationism" (e/acc) is a movement sparked by @BasedJeffBezos and @BayesLord, and extended & promoted by technologist / VC Marc Andreessen, among others. I find myself aligned with most of e/acc philosophy: grounded in physics, optimistic, build-up not tear-down, and more.

e/acc’s approach to AI is “let everyone have at it, speed it up”. It aims for a multi-polar AI world: thousands (or millions or billions) of superintelligent AIs or entities with superintelligent AIs, keeping each other in check. It’s a bit like the USA which balances power among three entities (legislative, executive, judiciary). Or, it’s like blockchains which balance power among thousands of nodes.

Therefore, perhaps surprisingly, e/acc is likely safer than the “deceleration” approach (which only has balance among two powers) 😲!

e/acc is also open to human superintelligence (HSI), but with no special emphasis. It's meant to be an umbrella idea, for others to add detail with zoom-ins.

Vitalik Buterin's "decentralized accelerationism" (d/acc) zooms in on e/acc, emphasizing decentralized technologies with a bit more bias towards safety. Like e/acc, it's open to HSI, though with no special emphasis.

Among those thousand or billion+ superintelligent AI entities, e/acc assumes that at least some of them will be friendly to humans; and that they will help humans have a role in the future. But what if the friendly ones are overruled by the unfriendly ones? And as the ASI risk introduction covered, why would gods bother treating ants well?

Fortunately, e/acc is sufficiently broad that it allows for variants not needing this assumption. The most promising variant is: use BCI to get a competitive substrate, with mass adoption. That’s bci/acc! This post will elaborate below.

2.4.4 Idea: put it in a cage, unplug if things go awry

First, some background. You can think of Bitcoin as a really dumb robot that does just one thing: maintain a ledger of transactions. Yet it’s also sovereign: it answers to no one, it is its own independent entity, you can’t unplug it. Similarly, Ethereum is sovereign. The Uniswap V2 decentralized exchange contracts running on Ethereum are sovereign too: they answer to no one. Arweave permanent data storage is sovereign. Ocean Predictoor AI-powered data feeds are sovereign. Every smart contract that doesn’t have governance is sovereign. Finally, the internet itself is sovereign. Building sovereign software systems is a solved problem. Appendix 5.1 elaborates.

With that background in place, let’s review the idea: “put the ASI in a cage, and unplug if things go awry”.

Here’s one problem: you can’t unplug it. The ASI is smart, so it’s already made itself decentralized, therefore sovereign, therefore un-unpluggable. Just like Bitcoin.

Some observers see this idea and other similarly glib “takes” as a waste of energy. The “AI Alignment Bingo” in Appendix 5.2 offers a concise (and hilarious) summary of many takes & responses.

2.4.5 Idea: fancier cage

The idea is to use advances in cryptography, blockchain, and more to make the cage “hack proof”. Sergey Nazarov of Chainlink is a proponent, among others.

The problem: humans are the weak link in computer systems. Hackers like Kevin Mitnick have made Swiss cheese of compute systems by tricking gullible humans to give him access, not by attacking the software or hardware directly. Therefore, the “fancier cage” idea is not feasible unless we 100% solve human gullibility (not going to happen).

Loki after tricking Thor to escape a fancy cage: "Are you ever not going to fall for that?" [Avengers 1]

2.4.6 Idea: align via a post-hoc wrapper

This is the approach that OpenAI took for GPT. The idea is akin to installing an aftermarket exhaust system on a new car, to tune behavior in a particular direction. For example, train an unconstrained LLM first; then tack on RLHF training to align with human values. If all goes well, scale up this approach as we get to AGI and ASI.

Alas, it has been shown to be easy to jailbreak, with holes everywhere, as the world has witnessed with ChatGPT running GPT-4. Finding issues and adding more constraints will end up as endless whack-a-mole. I've been there, for other AI problems. The root problem is that tacking a band-aid on such a core problem will (likely) never be enough.

Main: aligning an AI via a post-hoc wrapper is like adding an aftermarket exhaust system to your car. Bottom right: endless jailbreaks is like whack-a-mole, where as soon as you whack one issue, another pops up.

2.4.7 Idea: dumber AIs aligning smarter ones

This is the approach published by OpenAI in December 2023. The idea is to have a chain of AIs from dumb → smart, where each link is a dumber AI aligning the next-smarter AI.

Alas, this is only as strong as its weakest link (and links can be weak), there is risk of over-leverage (think 2008 financial crisis), and the ASI at the end of the chain might disagree or change the rules. Appendix 5.3 elaborates.

2.4.8 Idea: align the AI while training

Can we ever align something 1000x smarter than us? This idea sidesteps that concern in the near term in two complementary ways:

1. Diligently choose a training set based on strongly-held human values [Weng2023].
2. Start at 1x-level or even 0.1x-level, as in a human baby, then grow it to a child, then a teenager, then an adult, and beyond, keeping it aligned the whole time [Goer2013].

This is akin to growing square watermelons, which grow subject to human-induced constraints 🍉 🤖. The hope — but not guarantee — is that as it goes from 1x to 10x and beyond, it remains aligned to human values.

This approach also assumes that data-centric learning will be the trick to get to ASI. It may be one of the most important, but maybe not the most important [Rutt2024b].

There’s promise to this idea; it’s worth trying.

2.4.9 Idea: bci/acc: get a competitive substrate via BCI

Silicon is a wildly powerful substrate: it already has amazing compute, storage and bandwidth and it keeps improving exponentially. It’s what’s powering AI, and soon, AGI and ASI.

This idea is: our current meatbag brains just can’t compete against silicon for processing power. It’s “1 person” of processing power vs 10 million.

Everything that silicon touches goes exponential: the “Silicon Midas Touch”. For our brains to compete with silicon, they must touch silicon. The higher bandwidth the connection, the more that our brains can unlock the power of silicon for our selves.

Therefore we need to swallow our pride, stop treating carbon like a deity, and get a competitive substrate: silicon. The specific “how” is brain-computer interfaces (BCIs), or uploading. The target is human superintelligence. Some call this “the merge”. Others, “intelligence amplification” (e/ai).

Given ASI timelines of 3–10 years, simply hoping for “the merge” means it likely won’t happen fast enough. We need to accelerate it somehow. The options are BCIs or uploading. Uploading is still mostly a scientific problem, way too far out to be relevant to ASI risk. In contrast, BCI has already matured past the science into engineering problems. Of the two, BCI is the more pragmatic.

We can’t just invent an amazing BCI technology. To truly counter ASI, we need to get it in the hands of the mainstream billions.

In short, we need to accelerate BCI and get it to mass adoption. This is what bci/acc is all about.

Another framing of bci/acc (in one variant) is: “align the AI at the core, as you train”, but in the most hyper-localized way imaginable. Train one AI for every single human, where each human constrains the AI in real time, and the AI starts small and grows iteratively. It’s a square-watermelon AI as a co-processor to your brain 🍉 🧠.

bci/acc: accelerate BCI and take it to the masses. It uses BCI killer apps like silent messaging (SMs) to create market demand, which in turn drives BCI device evolution to a substrate competitive with ASI.

3. Human Superintelligence via bci/acc

3.1 Introduction

Accelerating BCI (bci/acc) is among the least-discussed approaches, yet it may have the best chance of success to address ASI. So it’s imperative that we explore bci/acc more deeply.

3.1.1 High-Bandwidth BCI challenges

To go all the way to human superintelligence, non-invasive BCI likely won’t have enough bandwidth. We will need super-high bandwidth BCI, via neural implants (invasive) or optogenetics (semi-invasive) or other such brain technologies.

Alas, going invasive or semi-invasive has its own challenges, on engineering, regulatory, and societal fronts:

1. Engineering. The main goal is to increase bandwidth — a hard enough thing on its own. Yet engineering must also solve critical privacy risks, lest we lose cognitive liberty.
2. Regulatory. Getting approval for human trials on (semi-)invasive brain technologies is currently a long, high-friction process, in the name of test-subject safety. Alas, the current regulatory structure ignores the much larger ASI risk to Humanity: a bike-shedding problem. How can we speed this up?
3. Societal acceptance. Even if the devices existed and regulations allowed them, invasive BCI currently feels icky to most people. This will affect Humanity’s ability to manage ASI risk. The Overton Window will likely need to shift so that mass society is more open to such technologies.

There are two different routes to solving challenges (1)(2)(3): implants-first and masses-first. Let’s explore each.

3.1.2 Implants-First Route

Elon Musk’s Neuralink has made great progress over the past decade.

Tansu Yegen on Twitter: “🧠 Elon Musk announced the first successful Neuralink brain chip implant in a human. Think about telling someone 10 years ago that by 2024, we'd be on the brink of unlocking telepathy...”

Neuralink is perhaps the furthest along on engineering (1). Its path to regulatory (2) is to focus on healing people, which limits its speed. Societal acceptance (3) is on ice until regulatory (2) is much farther along. In short, its route is (full 1) → (full 2) → (full 3). While I’m a Neuralink fan, to maximize chance of success, I’d love to see more companies chase this route.

3.1.3 Masses-First Route

Given ASI timescales, the Neuralink route to (1)(2)(3) may not be fast enough. There’s another path: route (partial 1, full 2, full 3) → (better 1, full 2, full 3) → (full 1, full 2, full 3). That is: start with non-invasive BCI tech that has no regulatory issues, and get mass adoption. Use this mass adoption to grow societal openness and open up regulations towards (semi) invasive BCI.

The starting point is killer apps for healthy people with non-invasive tech.

- Killer app. To hit the masses, BCI needs a killer app. We need to “make something people want”. Silent messaging (SMs), aka pragmatic telepathy, is one candidate; perfect memory is another; there are more. Below, I explore candidate killer apps like silent messaging. Once we have that first killer app, we can expand to adjacent functionalities.
- Healthy people. To hit the masses, BCI needs to optimize healthy humans, not merely fix human ailments. Otherwise it’s not mass-market enough.
- Non-invasive first. To hit the masses, BCI needs to be non-invasive to start. Invasive won’t get enough takers at the beginning, and regulatory is a bottleneck. But to truly leverage the Silicon Midas Touch we must get to invasive. How? Pressure from market forces and ASI risk will take us over the hump.

3.1.4 Discussion & Outline

bci/acc allows for an implants-first route, a masses-first route, and other routes. We don’t know which will be best; we should explore all of them aggressively. Since Neuralink’s actions elaborate the implants-first route, much of this post will focus on the masses-first route. (To be clear, bci/acc includes all routes.)

The next sections elaborate on masses-first bci/acc as follows. First, I will briefly review some emerging technologies that will help. Then, I survey some candidate BCI killer apps. Then, I describe how demands from market forces and ASI risk will drive BCI performance up. Finally, I describe how many iterations take us to human superintelligence (HSI), a new phase for Humanity.

3.2 Baseline Tech for bci/acc

Humanity’s technology capability frontier keeps expanding. This section explores technologies on the market that are adjacent to bci/acc. They can be used as lego blocks towards launching the first BCI killer apps.

3.2.1 EEG for Typing (“Silent Messaging”)

EEG for typing keeps improving. As mentioned earlier, as of 2023 researchers could type via EEG at 62 words per minute. And it keeps getting better. How do you think Stephen Hawking wrote his books? (Via an assistive typing interface, though his was driven by a cheek-muscle sensor rather than EEG.)

3.2.2 EEG for Focus

There are other companies targeting the mainstream with EEG. For example, Neurable makes consumer BCI headphones to help people focus. You put on the headphones, which detect electrical signals on the skin around your ears, and they ping you when you fall out of focus. There’s also EEG to track emotions, alertness, arousal, meditation effectiveness, and more [Ref].

3.2.3 Subtitles on Glasses

XRAI, Vuzix and others offer glasses with subtitles for the deaf: “hear with your eyes”. The glasses have a microphone to capture audio; AI-based speech recognition transcribes it; and the text renders on the subtitle display. The tech can be inexpensive, since the subtitles can use 1970s-era LCD displays and 99% of the rest can run on a smartphone.

3.2.4 AI-powered glasses with voice interface

Eleven years ago, we had Google Glass doing this. It was officially scrapped due to privacy concerns, and unofficially because society just wasn’t ready for it. Since then, we’ve had another decade of smartphone evolution and adoption. We’re in an Instagram x TikTok era where privacy matters less, for better and for worse.

In October 2023, the Ray-Ban | Meta Smart Glasses shipped. The device records and stores video directly from the glasses. You can tap it to send photos or videos to friends. There was no privacy or weirdness pushback; the Overton Window had shifted. Eleven years was more than enough for society to be ready. From personal experience: they’re lightweight to wear, and according to Ray-Ban employees they’re selling briskly.

3.2.5 AR Goggles + Hand Gestures: Meta Quest 3

The Meta Quest 3 was released in October 2023. Whereas its predecessors were Virtual Reality (VR) goggles, it brings in the real world: Augmented Reality (AR), aka mixed reality or spatial computing. It scans your room, and renders real-plus-overlay into your headset’s display. It tricked my brain into “being there”. You can control it with hand gestures, but these are still unreliable; the Quest still supports handheld controllers.

3.2.6 AR Goggles + Eye Tracking: Apple Vision Pro

For any given device idea, Apple may iterate for years or decades before they release it, if ever. Why? Because they only release when the device not only “doesn’t suck”, but is actually pleasant or delightful to use. This was the case for phones, for tablets, and for cars (still a WIP).

It’s also the case for AR goggles. They have patents on AR going back two decades. Yet they finally put a device up for presale on Jan 19, 2024: The Apple Vision Pro. From Apple’s perspective, they’ve cracked AR well enough to release something pleasant or delightful.

What’s changed? Eye-tracking-based input. Eye tracking has been used for medical research for decades, and for more widespread things like consumer marketing for 10+ years. You can use eye tracking to type, move a cursor, click a button, and more.
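To make that concrete, here’s a minimal sketch of dwell-based gaze selection, one common eye-tracking input pattern: if the gaze stays within a small radius for long enough, treat it as a click. The dwell threshold, radius, and synthetic gaze stream below are all illustrative assumptions, not any vendor’s API.

```python
import math

DWELL_SECONDS = 0.6   # assumed dwell threshold to register a "click"
RADIUS_PX = 40        # assumed gaze-jitter tolerance, in pixels

def detect_dwell_clicks(gaze_samples):
    """Yield (x, y, t) whenever gaze stays within RADIUS_PX for DWELL_SECONDS.

    gaze_samples: iterable of (x, y, t) tuples, t in seconds, ordered by t.
    """
    anchor = None  # (x, y, t) where the current dwell started
    for x, y, t in gaze_samples:
        if anchor is None:
            anchor = (x, y, t)
            continue
        ax, ay, at = anchor
        if math.hypot(x - ax, y - ay) > RADIUS_PX:
            anchor = (x, y, t)          # gaze moved: restart the dwell timer
        elif t - at >= DWELL_SECONDS:
            yield (ax, ay, t)           # held long enough: emit a click
            anchor = (x, y, t)          # reset so we don't re-fire immediately

# Synthetic gaze stream: ~1s fixation near (300, 200), then a saccade away.
samples = [(300 + (i % 3), 200, i * 0.05) for i in range(20)] + [(600, 500, 1.05)]
for click in detect_dwell_clicks(samples):
    print("click at", click)
```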

Apple Vision Pro has eye tracking. Knowing Apple’s approach to new devices, they probably already have interfaces to type, move a cursor, and click buttons — all hands free, accurate, and pleasant.

As it rolls out, there’s a good chance people will find it as magical as multi-touch on phones. Eye-tracking is to AR control what multi-touch is to phones. It may be the remaining piece to take AR beyond video games and truly mainstream. And it will become table stakes for AR; expect Quest 4 to have it.

I can’t emphasize this enough: eye-tracking may be the “unlock” that makes these head-mounted glasses or goggles actually useful.

3.2.7 Eye-Tracking is BCI (!)

Eye tracking offers the hands-free benefits of BCI with the accuracy of moving your hands. Eye tracking feels like BCI: moving your eyes doesn’t really feel like movement. Yet it’s nearly as accurate as moving your hands, because ultimately eye tracking is motor control.

If a 20-year-old university student’s eyes are bloodshot, there’s a good chance they are hungover, got little sleep, or both. To generalize this, our eyes tell a lot about our health. In the last few years, there’s been an explosion of research using HD images or videos for medical diagnosis or treatment. A recent “Frontiers in Neuroscience” edition had 23 articles dedicated to this topic, including this intro.

So: (1) eye tracking takes HD videos of eyes; (2) HD videos of eyes are sensors for brain activity; therefore (3) eye tracking is BCI.

HD video of eyes implies a BCI sensor. Modern eye-tracking takes HD videos of your eyes. Thus, modern eye tracking is BCI.

[Quote from Frontiers in Neuroscience]

3.3 Candidate BCI Killer Apps

We’ve covered how ASI is coming, and how Humanity’s best chance to stay competitive is to accelerate BCI and take it to the masses (bci/acc). To get BCI to mass adoption, we need an application of BCI that the masses really want to use — a killer app.

We don’t know which killer app might take off first. However, we can explore possibilities. This section reviews some of those.

3.3.1 Candidate killer app: Silent Messaging

Just as Neal Stephenson’s 1992 novel “Snow Crash” is the archetypical vision for Virtual Reality, Vernor Vinge’s 2006 novel “Rainbows End” is the archetype for Augmented Reality.

Infused throughout Rainbows End, there’s a special <sm> tag for when the characters are messaging each other with “silent messages” (SMs):

Vinge leaves the reader to infer what specifically SMs are. But one soon realizes that it’s messaging each other simply by thinking about it. Yes, telepathy, but presented as just part of the furniture, and it just works, therefore “pragmatic telepathy”.

SMing = silent messaging = sending text or voice by thinking about it. Send = eye-tracking / EEG / etc. Receive = subtitles on glasses.

How would we do this? One inputs messages via EEG BCI, eye-tracking, or subvocalization. One receives messages via subtitles on glasses or goggles, or audio in your ear.

Specific implementations are any combination of the above. Examples:

- Glasses with subtitles + EEG BCI sensors on the top of the glasses, touching your forehead inconspicuously.
- An Apple Earbud-like device that captures sub-vocalizations, then synthesizes speech and outputs it to others as audio.
- Apple Vision Pro for eye-tracking input and subtitles-based output. Therefore society may get (pragmatic) telepathy upon the release of Apple Vision Pro (!).

3.3.2 Candidate Killer App: Internal Dialog

Imagine Jiminy Cricket on your shoulder, sharing advice or facts when you call upon him. Without having to pull out your phone and type; without having to read results on your screen. “What’s the capital of Portugal?” “Is this person lying to me?” “What’s next on my TO-DO list today?”

Achieving this is straightforward: type with BCI / eye-tracking / subvocalization; the text goes to a ChatGPT-style bot; and the output is rendered visually in the glasses / goggles, or as audio.
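To show how thin this pipeline really is, here’s a minimal sketch. The read_silent_input() and render_to_glasses() helpers are hypothetical stand-ins for the BCI decoder and glasses renderer; the LLM call uses OpenAI’s Python client, with the model choice as an illustrative assumption. The same skeleton also underlies silent messaging, with a friend’s device replacing the bot.

```python
# Minimal sketch of the internal-dialog loop: BCI text in, LLM answer out.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def read_silent_input() -> str:
    # Stand-in: a real system would decode EEG / gaze / subvocal signals here.
    return "What's the capital of Portugal?"

def render_to_glasses(text: str) -> None:
    # Stand-in: a real system would draw subtitles or synthesize audio.
    print(text)

question = read_silent_input()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": question}],
)
render_to_glasses(response.choices[0].message.content)
```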

3.3.3 Candidate killer app: Perfect Memory

Here, you record images / audio / video with glasses, goggles, or a necklace-style device like Rewind Pendant. This gets stored locally or globally.

You search the recordings via EEG BCI, eye tracking, or sub-vocalization. Or, use near-infrared non-invasive BCI on the back of your scalp to read what’s going on in your visual cortex. It doesn’t need to be perfect; it just needs to be good enough to serve as a query across video feeds. Even ten years ago, research results were extremely promising.
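One plausible way to make a noisy signal “good enough to serve as a query” is nearest-neighbor search over embeddings: embed every stored frame once, embed the (noisy) query, and return the closest frames. A minimal sketch, where the random vectors are stand-ins for a real learned encoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: in a real system these would come from a learned encoder that
# maps video frames (and decoded brain signals) into a shared vector space.
frame_embeddings = rng.normal(size=(10_000, 128))   # one row per stored frame
query_embedding = frame_embeddings[42] + 0.3 * rng.normal(size=128)  # noisy query

def top_k_frames(query, frames, k=5):
    """Return indices of the k stored frames most similar to the query."""
    frames_n = frames / np.linalg.norm(frames, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    similarity = frames_n @ query_n            # cosine similarity per frame
    return np.argsort(similarity)[-k:][::-1]   # best matches first

print(top_k_frames(query_embedding, frame_embeddings))  # frame 42 should rank first
```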

Once you’ve found the memory, it gets rendered in the glasses or goggles.

You won’t have to retrieve by moving your fingers around; you’ll just move your eyes, or think with the EEG, and you’ll retrieve these videos. Everything you saw, you’ll have perfect memory of. It will feel magical.

Perfect memory: (1) record via glasses cam, then store; (2) retrieve via eye-tracking / EEG / etc; (3) project the result on the glasses’ display.

3.3.4 Candidate killer app: Share Visual Memories

Here, you search & retrieve videos like in the “perfect memory”.

Then, you click “share” and choose “to whom” via BCI / eye-tracking / sub-vocalization.

A picture’s worth a thousand words: we’ll be able to communicate with others at higher bandwidth than ever before.

3.3.5 Candidate Killer App: Talk in Pictures

Here, you share video to others, but no longer bound by what you’ve seen or found. Rather, you type (via BCI etc) to prompt a generative AI art system. You do this in real-time, and send the images / videos in real time to someone else. They see it and respond, in images / video.

Now. You’re. Talking. In. Pictures.

3.3.6 Candidate Killer App: Talk in Brain Signals

We go further than talking in pictures. If the devices always displayed raw brain signals alongside text or images, then over time our brains would learn the mapping. It wouldn’t be much different than learning Spanish, sign language, or Morse code. Our brains can handle unusual inputs, like learning to see with your tongue. The net result: we could communicate directly with raw brain signals. AI research often finds “direct” to be better than using intermediate features, given enough data. It’s a brain-brain interface.

From this, a new kind of language — a neural language — could emerge, which will chunk lower-level abstractions into higher-level ones for yet higher bandwidth [Rutt2024c]. We’ll have transitioned from skeuomorphic languages for our brain (text/images as a bridge to the past, tuned for the outer world) to brain-native languages (tuned to our inner world).

This approaches the long-held science fiction dream of “mind meld” as “a telepathic union between two beings; in general use, a deep understanding.” We can start building primitive versions now.

Mind-meld: talk in pictures, raw brain signals, or a new neural language

3.4 The Journey to High-Bandwidth BCI

We’ve discussed the risk from ASI, how BCI is the most pragmatic path, BCI challenges (engineering, regulatory, societal), and possible BCI killer apps to kick-start usage by the masses. What then? This section explores how market forces and ASI risk will drive further evolution and adoption of BCI, including a transition to more invasive technologies.

3.4.1 Introduction

A silicon-stack co-brain offers 100x+ more storage, and 100x+ more compute, compared to our bio-stack brains. Alas, it is held back by the low bandwidth between the bio-stack brain and the silicon-stack co-brain.

Non-invasive techniques like EEG, eye-tracking and subvocalization can only take us so far [BciTech]. There’s an upper bound to their bitrates; it’s not very high; and we’ll probably squeeze every last bit from them.

And. There are invasive techniques that promise 100x+ more bandwidth. Most promising are chip implants, and optogenetics. Let’s review those, then see how those might enter mainstream usage by healthy humans.

3.4.2 Bandwidth++ via Implants

Here, a doctor or machine opens up a portion of your skull, slips in a chip, and seals it back up. That chip then talks to your brain, and wirelessly to computers. 100x+ the bandwidth compared to EEGs, boom.

Research has happened for decades. Neuralink is a leading example. It’s in early stages of human trials.

Implants (conceptual)

3.4.3 Bandwidth++ via Optogenetics

Optogenetics enables reading & writing on the brain. One gets an injection containing a “useful virus” that changes specifically targeted neurons to fire when light is shined on them; and more. Put precisely:

“Optogenetics is a technique to control or to monitor neural activity with light which is achieved by the genetic introduction of light-sensitive proteins. Optogenetic activators [“opsins”] are used to control neurons, whereas monitoring of neuronal activity can be performed with genetically encoded sensors for ions (e.g. calcium) or membrane voltage. The effector in this system is light that has the advantage to operate with high spatial and temporal resolution at multiple wavelengths and locations”.

Optogenetics research is proceeding. As of 2021, there were four clinical trials involving optogenetics (on humans).

Optogenetics is promising for mass BCI because it’s less invasive than chip implants (injection vs surgery) and may offer more bandwidth (across the whole brain, yet fine-grained).

However, because it involves genetic manipulation and coaxing our neurons to respond to (and emit) light, many side effects are possible. For example, what if the brain fires too much and causes a seizure? Nonetheless, given ASI risk, research needs to proceed with even more urgency than before. It will need to get past a bike-shedding problem, as the next section elaborates.

Optogenetics (conceptual)

3.4.4 Invasive BCI regulation has a bike-shedding problem

None of the research on implants or optogenetics is (officially) aimed at healthy humans; it’s all for fixing human ailments.

Why? Because it’s already super-hard to get regulatory approval for human trials even when the goal is fixing ailments; approval for enhancing healthy humans has seemed unattainable.

Why? Put yourself in the shoes of a regulator. You’re used to balancing risk vs reward for a narrowly-scoped problem to fix a specific human medical ailment. You’re not used to balancing risk vs reward for a civilization-scoped issue, to avoid a non-medical existential risk for all Humanity. (Despite being the gatekeeper for that.)

So what do you do? You focus on what you know, and dismiss the existential risk. This has a name: bike-shedding. It’s when the safety committee for a nuclear power plant spends 95% of its time discussing the bike shed, because it isn’t equipped to do anything about the big hairy nuclear-risk issue.

BCI research is being bike-shedded right now. I’m hopeful that this will change as regulators and their higher-ups recognize the issue.

3.4.5 What will tip invasive BCI into the mainstream?

Given the current regulatory constraints, how can invasive BCI accelerate into the mainstream? I see two main forces driving demand to make this happen: fixing ASI risk, and market forces.

ASI Risk. Ideally the regulators of large nations recognize the bike-shedding bias and reduce BCI restrictions, perhaps being super-aggressive to accelerate BCI via a “BCI Manhattan Project”. This could build on existing BCI-for-defense research like DARPA’s decades-long program.

Smaller, hungry nations may take the lead, for the $ and the PR. There’s $ and PR incentive for nations that loosen rules to meet market demand; witness Estonia’s e-Residency, China’s Shenzhen special economic zone, and Singapore’s crypto regulations. There’s a growing trickle in the medical domain. For starters, via the Zuzalu project, Montenegro recently lightened rules to catalyze longevity research. Most interestingly, Honduras already has very light rules for medical testing: Bryan Johnson recently leveraged it to get a novel gene therapy there, and there’s nothing stopping aggressive BCI testing in Honduras. Growing movements like Network State and Blueprint will further catalyze this jurisdictional arbitrage for invasive BCI.

Market forces. Consumers who start with non-invasive BCI will demand more performance, therefore more bandwidth, which means invasive BCI. Thus, there’ll be a bottom-up consumer push for invasive BCI.

When consumers see others who got BCI for medical treatment receiving benefits far beyond getting healthy again, they’ll get particularly insistent.

People with the $ who are ready to accept the high risk and high reward will fly to Honduras for medical invasive-BCI tourism. Or they’ll build their own, just as Professor Xavier built the Cerebro BCI in X-Men. Military BCI will leak to criminals and the black market, then into the mainstream to satiate demand, like in Strange Days and Cyberpunk: Edgerunners. Businesses will sprout up to get “medical BCI” into the hands of anyone who asks, as we saw with medical marijuana in California.

Those with the $ and risk tolerance to get high-bandwidth BCI first will enjoy a significant advantage. This will raise legitimate questions about fairness. Ideally, cost and risk will come down quickly, making it broadly accessible. Let’s see.

Left: Professor Xavier using Cerebro in X-Men. Middle: Spinal-implant BCI in Cyberpunk Edgerunners.

3.4.6 Mainstream invasive BCI will grow, a lot

We just covered how invasive BCI will tip into the mainstream. What happens next? It’s the economics, silly. There was great demand even before invasive BCI, despite limited bandwidth. Invasive BCI will unlock massive bandwidth, critically, to a market demanding it. So BCI growth rate will steepen.

The BCI market will merge with the $500B smartphone market, if it hasn’t already merged pre-invasive. The iPhone 20 or 25 will be BCI-based, perhaps via a merge with Apple Vision Pro. The Meta Quest 7 or 10 will get invasive BCI to complement eye-tracking and other non-invasive BCI. Neuralink will launch its “phone”. Expect Samsung, Microsoft, OpenAI and others to get in the game too. A lot of $ is up for grabs.

Shmoji on Twitter: “whenever people ask Elon why none of his companies have made a phone yet, he responds ‘Neuralink’. You won’t need a phone.”

What then? The devices will evolve and improve, subject to intense competition, one generation to the next, like mobile phones have over the past 40+ years.

Market forces and the Silicon Midas Touch drive performance. We’ll see 10x in bandwidth, which unlocks 10x+ more storage and compute. Then 100x in bandwidth, unlocking more storage and compute yet. Then, especially once we go semi-invasive/invasive with optogenetics or implanted chips, we’ll see 100x+ bandwidth, and corresponding 100x+ in storage and compute.

Moore’s Law and AI improvements within BCI apps will further catalyze usefulness and demand. As a recent example, “BrainGPT” uses LLMs to interpret brain data, for significant error reductions.

3.4.7 Your BCI will be part of *you*

We are all natural-born cyborgs: when you ride a bike, it becomes part of you, as far as your brain is concerned. Same for keyboards.

The same will be true for BCI.

As far as your brain is concerned, your BCI — and the computers that you access — will be part of you.

3.4.8 Hyper-localized AI Alignment

You’ll be a cyborg, with a bio-stack meatbag brain and a silicon-stack brain working in unison. This will feel as natural as using a keyboard or riding a bicycle.

The compute & storage of the silicon-stack will have its own AI-based automation, to abstract complexity from the bio-stack side. As its compute & storage grows, we can expect emergent intelligence (in the John Holland sense). Then a concern arises: could the silicon stack AI take over the bio stack? The alignment problem rears its head here too! Fortunately, there’s a natural solution.

To maximize the chance that the silicon stack stays aligned with us, we ensure that processing and storage do not outpace bandwidth, at each evolutionary step along the way. This is no guarantee, however: what if the silicon side starts accessing way more compute from the internet?
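One way to picture that rule is as an invariant checked at every upgrade step: the silicon side may only grow its compute as fast as the bio-silicon link can supervise it. A toy sketch, where the supervision ratio and all numbers are invented for illustration, not a real safety mechanism:

```python
# Toy invariant for the "don't outpace bandwidth" rule.
MAX_COMPUTE_PER_BANDWIDTH = 100.0  # assumed supervisable ops per bit/s of link

def upgrade_is_aligned(bandwidth_bps: float, compute_ops: float) -> bool:
    """Allow a silicon-side upgrade only if the link can still supervise it."""
    return compute_ops <= MAX_COMPUTE_PER_BANDWIDTH * bandwidth_bps

steps = [(1e3, 5e4), (1e4, 1e6), (1e4, 1e7)]  # (bandwidth, proposed compute)
for bw, ops in steps:
    verdict = "ok" if upgrade_is_aligned(bw, ops) else "blocked: grow bandwidth first"
    print(bw, ops, verdict)
```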

This is different from traditional AI alignment approaches: here we are aligning the AI in real time, aligning it with our selves. It’s hyper-localized to each of us. It’s one aligned AI per human, rather than 1 or 10 for the human race. Therefore it’s 10 billion times more fine-grained and personalized. It’s AI alignment taken to the limit (in the calculus sense). There’s no guarantee that this will work. But it’s highly promising.

3.5 The Journey to Human Superintelligence

This essay started with the ASI risk, and has shown a path to accelerate BCI and take it to the masses. The previous section showed how market- and risk-driven evolution took us to high-bandwidth BCIs. This section picks up from that.

At first, the silicon-stack brain’s power will be much weaker than the bio-stack one, bottlenecked by bandwidth. Then, we’ll increase bandwidth iteratively, with corresponding unlocks in compute and storage.

The silicon-stack side will reach parity with the bio-stack side.

Then it will start to surpass it.

We’ll keep going, as the market will demand it. [Rutt2024d]

The silicon-stack side will become radically more powerful than the bio-stack side.

And that will be fine with us! We’ll have gone through each BCI generation iteratively, rather than having it all sprung on us at once. Our worries will abate, as the silicon-stack AI will be aligned.

In fact, the silicon-stack will feel like part of *you* as far as you’re concerned, because you’re a natural-born cyborg along with the rest of us. The emergent patterns of intelligence on the silicon-stack side will be wholly our own.

It will feel like the most natural thing in the world.

Each of us will grow our compute & storage by 10x, 100x, 1000x, more. The silicon-stack emergent patterns of intelligence — part of us — will grow 1000x too. Yet we will still be humans.

We will have grown to achieve human superintelligence.

There’s more. Let’s say you’ve got to 1000x storage & compute, 1000x intelligence via the silicon-stack side of your self. Let’s say you’re now 90 years old and on your deathbed. Your bio-stack body and brain is dying.

Yet your bio-stack brain is now only 1/1000 the intelligence of the silicon-stack side. It’s probably been annoying you for a while, perhaps holding you back. And now it’s really holding you back, lying there.

What do you do?

You clip it like a fingernail.
And now you are on a pure silicon stack.

There’s more. Consider the possibility that in 100 years (or 20) the majority of intelligences will be on a silicon or post-silicon substrate. Some will have human origin, some will have pure AI origin, and some will have a mix. They will all be general; they will all be sovereign; they will all be superintelligent. They are Sovereign General Intelligences (SGIs).

What will the landscape look like? Hyperion Cantos provides inspiration. SGIs will inhabit the datumplane: “common ground for man, machine, and AI.”

The datumplane: common ground for man, machine and AI

So we have a path to unbind ourselves from biological constraints, while retaining our humanity. Which makes it a great time to ask. How big can you dream? What’s the biggest thing that civilization could possibly achieve? What do we want, as Humanity?

I mean Humanity in the broadest sense of the word: not just humans, but the multiple layers of civilization that encompass humans. Our thoughts and dreams, our patterns of intelligence, and how we want to self-actualize as a civilization.

As a baseline, we definitely know we don’t want to die, whether from asteroid strikes, nuclear holocaust or AIs terminating us all. “Not die” is a starting point. Accelerating BCI helps address all of those, because it allows us to easily be multi-planetary and be competitive with pure AIs.

“Not die” is an “away” viewpoint. Can we be more positive than that, with a “towards” perspective? Several steps more optimistic is: explore the cosmos, Star Trek style. That would be a grand adventure for Humanity on its own.

We can do better yet: let’s reshape the cosmos! Build Dyson Spheres to harness the power of stars directly — Kardashev Scale Type II. Reshape the cosmos at the scale of galaxies (Type III). Master energy at the level of the universe (Type IV). Even perhaps attain the knowledge to manipulate the universe at will (Type V). Now that would be an adventure for Humanity! Count me in [Kard].

A grand adventure for Humanity: explore and reshape the cosmos!

3.6 Cognitive Liberty

It’s one thing for the surveillance state or surveillance capitalism to monitor our electronic lives, as they do now. We’ve almost come to terms with it, whether we like it or not. But what about monitoring our thoughts? This will be a red line for most people, and rightly so. If our thoughts cannot be private, we risk freedom and personal sovereignty.

This is the concept of “cognitive liberty” or the “right to mental self-determination”: the freedom of an individual to control their own mental processes, cognition, and consciousness. I was happily surprised to discover a book-length exposé of the issue in “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology” by Nita A. Farahany.

A Web3 framing is: “Your keys, your thoughts. Not your keys, not your thoughts”. Web3 points to a potential starting point too: you hold the keys to your data, using infrastructure like Arweave to store your brain data and Ocean Protocol to manage it. Appendix 5.1 elaborates. But this is only a partial solution; there will be many devilish challenges to work out. For example: if you hold the keys in your head, will the BCIs see those thoughts too? Dozens to thousands of person-years of R&D are needed here.

It’s hard to overstate the importance of cognitive liberty. We need more work on this, asap. I’d love to see more funding for research efforts here, not to mention BCI-acceleration efforts in general.

In an age of BCI, how do we protect our thoughts and retain cognitive liberty?

4. Conclusion

ASI is coming, perhaps in 3–10 years. Humanity needs a competitive substrate, in time for this. BCI is the most pragmatic path. Therefore, we need to accelerate BCI and take it to the masses. That is bci/acc.

The “masses-first” bci/acc variant is to bring non-invasive BCI with killer apps like silent messaging to healthy people first; then to use market momentum to get over the invasive-BCI hump; and finally, to keep growing the power of each human’s bio-stack and silicon-stack brains. Looping this repeatedly, the net result is human superintelligence (HSI).

In perhaps 100 years (or 20) the majority of intelligences will be on a silicon or post-silicon substrate. Some will have human origin, some will have pure AI origin, and some will have a mix. They will all be general; they will all be sovereign; they will all be superintelligent: they are Sovereign General Intelligences (SGIs). They’ll be reshaping and exploring the cosmos, climbing the Kardashev scales.

bci/acc is solarpunk: optimistic and perhaps a little gonzo. It’s e/acc zoomed-in for BCI. And it could be a grand adventure for Humanity.

5. Appendices

5.1 Appendix: Sovereign Web3 Software

This section describes how many state-of-the-art Web3 systems are already sovereign — beholden to no one — and how Web3 capabilities will keep expanding for powerful sovereign agents.

Decentralized Elements of Computing. A blockchain is a network of machines (nodes) that maintains a list of transactions. It’s decentralized: no single entity owns or controls the network. With a decentralized list of transactions, we can then decentralize the elements of computing: storage, compute, and communications. “Web3” is a more accessible term than “blockchain”, but they basically mean the same thing.

- Storage of Value. In blockchains, storage comes in two parts: storage of value, and storage of data. We already have a wonderful decentralized store of value, i.e. “digital gold”: Bitcoin, which stores BTC tokens. Released in 2009, it has a market cap > $700B and tens of millions of users. Just as Bitcoin has BTC as its native token, Ethereum has ETH, and so on; each is a store of value. Finally, there are tokens as decentralized scripts on top of a chain (e.g. ERC20).
- Storage of Data. Smallish amounts of data can live on a chain itself; that’s how value is stored. We also have larger-scale decentralized data storage: Arweave and Filecoin are the leading projects. We have decentralized access control to that data via Ocean Protocol, and decentralized data feeds via Chainlink [LINK].
- Compute (Processing). The first really great decentralized compute system was Ethereum, which came out in 2015. It runs smart contracts, which are simply small scripts running on decentralized compute hardware. It’s pretty expensive to do compute directly in Ethereum smart contracts, so there are ways it’s scaling up. These include (a) more powerful “Layer 1” chains like Solana, (b) “Layer 2” chains, especially “Zero-Knowledge Rollup” L2s, which enable compute to happen off-chain with provable results stored on-chain, and (c) decentralized compute markets like iExec and Golem, plus many more recent ones, including AI-specialized ones.
- Communications. Being decentralized networks, all blockchains have an element of communications built in. And there are multi-chain protocols like Cosmos or Polkadot, and cross-chain protocols like THORchain, CCIP, and Chainflip.

Smart Contracts & Chains are Sovereign. Perhaps surprisingly, every smart contract running on a chain is sovereign 🤯 [Gov]. For example, Uniswap V2 is a popular decentralized exchange. Each Uniswap pool — say ETH/USDT — has its own smart contract. Each of those pools is a robot that just “does its thing”: holding liquidity, giving some USDT for people who bring it ETH, and giving some ETH for people who bring it USDT. There are no humans helping it, it “just runs”, you can’t turn it off, it’s not beholden to any specific individual, organization or jurisdiction. It is sovereign.
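To make “it just runs” tangible: anyone can read a Uniswap V2 pool’s state straight from the chain, with no company or server in the loop. A minimal sketch using the web3.py library; the RPC endpoint and pool address are placeholders you’d fill in, and the ABI is trimmed to the one read-only call we need:

```python
from web3 import Web3

RPC_URL = "https://..."   # any Ethereum RPC endpoint (placeholder)
POOL_ADDRESS = "0x..."    # a Uniswap V2 pair contract (placeholder)

# Trimmed ABI: just the read-only getReserves() view of a Uniswap V2 pair.
PAIR_ABI = [{
    "name": "getReserves", "type": "function", "stateMutability": "view",
    "inputs": [],
    "outputs": [
        {"name": "reserve0", "type": "uint112"},
        {"name": "reserve1", "type": "uint112"},
        {"name": "blockTimestampLast", "type": "uint32"},
    ],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
pool = w3.eth.contract(address=Web3.to_checksum_address(POOL_ADDRESS), abi=PAIR_ABI)
reserve0, reserve1, _ = pool.functions.getReserves().call()
print("pool reserves:", reserve0, reserve1)  # no permission asked, none needed
```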

Each chain is sovereign too: beholden to no one. It’s why Bitcoin can be framed as a life form.

These sovereign smart contracts and chains can do all the usual things: store & manage wealth, store & manage data, do compute, and communicate. Uniswap and Bitcoin answer to no one.

And, they have rights! While no one lobbied for these robots’ rights, and no law was created for them, they have rights nonetheless, because they can manipulate resources without asking. How? Because the technology itself allows for it: it’s a dry-code, not wet-code, approach to rights. It’s “your keys, your Bitcoin” for robots themselves. Do you get it yet, anon?

AI & Agents for Web3. So far, chain-based robots haven’t been very smart. But this is changing as Web3 capabilities grow. Some examples:

- Prediction is the essence of intelligence. Ocean Predictoor is a recent system for prediction feeds, powered by AI prediction bots and consumed by AI trading bots. The feeds themselves are sovereign; the bots can be too.
- Fetch.ai and SingularityNET are Web3 systems to run decentralized agents (bots). These agents can be sovereign: no one owns or controls them.

The above AI * Web3 projects are by OGs in both AI & Web3. Advances in Web3 storage, processing, and communication have helped their capabilities. And the recent explosion in AI interest has brought a large new wave of AI * Web3 projects.

5.2 Appendix: AI Alignment Bingo

In 2022, Rob Bensinger tweeted the following text and image. It’s become a useful (and hilarious) reference in many AI & alignment crowds.

“Bad take” bingo cards are terrible, because they never actually say what’s wrong with any of the arguments they’re making fun of. So here’s the “bad AI alignment take bingo” meme that’s been going around… but with actual responses to the “bad takes”!
5.3 Appendix: Issues with ASI Risk Idea “dumber AIs aligning smarter ones”

This is the approach published by OpenAI in December 2023. The idea is to have a chain of AIs from dumb → smart, where each link is a dumber AI aligning the next-smarter AI.

Here, we elaborate on the issues.

- “Chain risk” framing. Each link needs crazy-high reliability, which likely isn’t achievable. Assuming independent failures: probability of chain failure = 1 − (1 − pfail_1) × (1 − pfail_2) × … × (1 − pfail_n) × (1 − pfail of non-link components). E.g. if there are 5 links in the chain and perfect reliability for non-link components, and you want <1% chance of failure, then each link must have roughly <2e-3 (0.2%) chance of failure. (See the numeric sketch after this list.)
- “Over-leverage risk” framing. This can be seen as over-leverage risk too. The 2008 financial crisis illustrates how over-leverage can go badly wrong. In 2008, there was a chain of derivatives on housing mortgages, like credit default swaps, which amplified billions into tens of trillions: home mortgage → 10x derivative → 100x derivative → 1000x derivative. Any fluctuation in home mortgages, such as slight changes to interest rates, rippled to 1000x effects downstream.
- Smartest entity might disagree. Rhetorically: could an ant align a bee → align a mouse → align a dog → align a chimpanzee → align a human? If you’re the human, would you let this happen?
- Smartest entity could change the rules of weaker layers. This is the smartest entity not just disagreeing, but actively pushing the other layers to its own benefit. In the 2008 financial crisis, to make more $, the bankers at the top (final link) were highly incentivized to grow the $ volume of mortgages (first link). This resulted in craziness. For example, a strawberry-picker husband & wife with < $15,000 combined annual income obtained a loan for a $720,000 house, with no money down. They had no hope of paying back the loan; the chain couldn’t last; the chain broke; the 2008 financial meltdown happened.
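The chain-risk arithmetic is easy to check numerically. A minimal sketch, assuming independent link failures and illustrative numbers:

```python
import math

def chain_failure(pfails):
    """P(chain fails) = 1 - product of per-link success probabilities."""
    p_success = math.prod(1.0 - p for p in pfails)
    return 1.0 - p_success

# 5 links, each with 0.2% failure probability -> ~1% overall failure.
print(chain_failure([0.002] * 5))   # ~0.00998

# Conversely: per-link failure budget for <1% overall failure over n links.
n, target = 5, 0.01
per_link = 1.0 - (1.0 - target) ** (1.0 / n)
print(per_link)                     # ~0.002, i.e. ~0.2% per link
```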

To summarize, solving ASI risk with a chain of AIs has great risk on its own.

Acknowledgements

Thank you very much to the following people for review, discussion, and feedback on these ideas and this essay in particular: Jim Rutt, Mark Meadows, Albert Wenger, Lou de Kerhuelvez, Jeff Wilser, Kalin Harvey, Bruce Pon, Joel Dietz, Shan Sundaramahalingam, and Jeremy Sim.

And, thank you to Mark Meadows for the opportunity to share the ideas with NASA, and Lou de Kerhuelvez & Allison Duettman for the opportunity to share with Foresight Institute. Finally, thanks to the e/acc movement for the courage and optimism. (And for inspiring the “bci/acc” label; it’s an improvement on “Bandwidth++”.)

Notes

[Spec1999] The fair was Spectrum 1999, held every four years at the University of Saskatchewan’s College of Engineering. Some people skied no better than random; others had 0% error. This was a useful lesson on the high variability among people in BCI accuracy. I found the same thing in experiments on other BCI devices too.

[Tsh2012] The researchers’ tricks included: more sensors, active not passive sensing (visual evoked potentials), maximizing the rate of neural firing, error-correction codes, and AI.

[Rutt2024a] Thanks to Jim Rutt for this specific framing. To expand on Jim’s words, lightly edited: “it’s not just 1000x more powerful but qualitatively different. The ASI could actually deeply understand in total detail even the most complex book. That’s very different from how humans create a rough highly compressed representation. Human memories are faulty and low fidelity; machines are not. Clock speed is 1 ms (1e-3 s) for neurons, and sub-nanoseconds (<1e-9 s) for chips. Today meat minds have a huge advantage in parallelization, but that will eventually be solved in silicon.”

[Hyp1989] Many sci-fi novels explore potential relations between humans and ASIs, where they act as gods to humans. The Hyperion Cantos and A Fire Upon the Deep are two prominent examples; there are more.

[Verd2023] “Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI”, Lex Fridman Podcast #407, Dec 29, 2023 https://www.youtube.com/watch?v=8fEEbKJoNbU

The e/acc movement has a second argument: the general idea of deceleration runs against the physics tendency of entropy to grow over time. On earth, this manifests as evolution in biology (the ability to acquire resources and reproduce), and as evolution in human organization (the ability to acquire resources and reproduce: capitalism). Even in the highly unlikely event that somehow all the above deceleration efforts were successful, in the medium term, entropy and evolution will route around them anyway.

[Snow2013] Remember, Edward Snowden’s 2013 revelations didn’t stop the goals of PRISM surveillance. Now, the USA and its allies simply get the data via big tech companies rather than directly.

[Weng2023] From private conversation with Albert Wenger in late 2023, soon to be public.

[Goer2013] This is an oft-repeated example by Ben Goertzel, from 2013 and likely earlier.

[Rutt2024b] Thanks to Jim Rutt for this idea, and inspiration for the wording. (Private correspondence.)

[Rutt2024c] Thanks to Jim Rutt for helping develop this idea.

[BciTech] These include functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), transcranial stimulation like TMS (magnetic) and tFUS (focused ultrasound), and more. Each has its own strengths and weaknesses. bci/acc may use any of these. Endovascular BCI has a particularly promising tradeoff of minimally-invasive yet high-signal.

[Rutt2024d] A great question from Jim Rutt, lightly edited: “While the market is an excellent hill climber, there is no guarantee at all that it finds global maxima. Maybe there ought to be a political/social layer. Is this what humanity wants?”

[Kard] Once bci/acc unbounds humanity 100% from biological constraints, the “bci” part is less important. bci/acc generalizes back into e/acc. These Kardashev-scale goals are 100% in-line with the goals of e/acc.

Also: if we were bound by our bio stack, there would have been a hitch. The nearest star beyond our sun is Proxima Centauri. It takes 4.3 years to get there if you’re traveling at light speed. OK, doable. However, if you travel at Voyager speed — the man-made device that’s gone the fastest in space so far — it will take 73,000 years. That’s a time scale roughly 30x larger than the time since the ancient Greeks. Less practically doable. As sci-fi author Charlie Stross has said, “Sending canned primates was never going to end happily”. Good thing we’ve unbound ourselves from our bio stack!

[LINK] Chainlink got its start doing decentralized data feeds. As it’s grown, its scope has expanded to much more.

[Gov] Assuming no governance, which is true for a lot of smart contracts. Uniswap V2 has no governance, though V3 does.

bci/acc: A Path to Balance AI Superintelligence was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Thales Group

Adani Airport Holdings Limited and Thales Forge Strategic Partnership to Improve Airport Operations and Passenger Experience in India

This strategic collaboration includes a fully integrated airport solution provided by Thales based on three pillars: smart airport security, biometric passenger journey, and operations efficiency; addressing all the airports operated by Adani Airport Holdings Limited (AAHL) in India. The overall solution encompasses Thales' Fly to Gate, deployed in early 2024 to provide passengers with touchless biometric solutions for DigiYatra1, and its Airport Operation Control Centre (APOC), which will be set up soon to enhance management and security at AAHL's airports. All in all, Thales’ technologies enable AAHL to revolutionise air travel in India by efficiently and securely managing complex airport operations while improving the travel experience for passengers, in full compliance with privacy regulations.

Adani Airport Holdings Limited (AAHL), the largest private airport operator in India, and Thales, a global leader in advanced technologies, today announced a strategic partnership to revolutionise AAHL’s international airport operations and passenger experience across the country. Under this partnership, Thales has already deployed the Fly to Gate solution at seven AAHL-managed airports2 in India, streamlining and enhancing the journey for millions of travellers since early 2024. Extending this collaboration, AAHL has now awarded Thales an additional contract to deploy the innovative Airport Operation Control Centre (APOC) at all its airports, to optimise overall airport management and enhance passenger experience securely.


The seven airports operated by AAHL are currently equipped with DigiYatra powered by Thales’ Fly to Gate solution, built on the responsible use of advanced facial recognition technology as secure passenger ID proof. Pre-enrolled passengers benefit from a smooth and trusted way to speed up their journey, eliminating the need to show an ID document and boarding pass at each checkpoint (from check-in to boarding). Reducing passenger processing time by up to 30% at these airports, this seamless integration of responsible biometric solutions (cf. Thales TrUE Biometrics) aligns with the Indian government's vision of a digital India.

In addition, Thales has been awarded a contract covering the design, integration, and implementation of an end-to-end APOC solution for all AAHL-managed airports. This cloud-based ‘Smart Digital Platform’ will centrally host all the applications needed to improve overall airport management, security, and passenger experience. The innovative APOC platform collects operational data from integrated airport sub-systems and sensors, while complying with privacy standards. This data is then intelligently processed using automation, big data analytics, and robust artificial intelligence (AI) algorithms. The solution, which will be deployed soon, will anticipate and reduce unplanned resource shortages, increasing predictability and overall efficiency.

"We are delighted to strengthen our partnership with Adani Airport Holdings Limited to bring innovative technology solutions to revolutionise airport operations and the passenger experience in India. Our Fly to Gate biometric solution for DigiYatra and the smart Airport Operation Control Centre (APOC) will enable AAHL to streamline operations and also ensure a secure and simplified journey for millions of passengers. Together, we are committed to support India in its vision of becoming the largest aviation market in the world by 2047," said Mr. Ashish Saraf, VP and Country Director for India, Thales.

1 DigiYatra is a Ministry of Civil Aviation, Govt. of India led initiative to make air traveller’s/ passenger’s journey seamless, hassle-free and Health-Risk-Free. The DigiYatra process uses the single token of face biometrics to digitally validate the Identity, Travel, Health or any other data that is needed for the purpose of enabling air travel.

2 Mumbai, Ahmedabad, Guwahati, Jaipur, Lucknow, Mangaluru and Thiruvananthapuram.


Tuesday, 08. October 2024

TBD on Dev.to

TBD x Hacktoberfest


With October blazing through, we're greeted by pumpkin spices, the aroma of fall leaves drifting in the rain, and of course, the much-anticipated Hacktoberfest. Whether you're a seasoned contributor or new to open source, there's something for everyone.

🎉 We're Participating in Hacktoberfest 2024!

We have several projects with a variety of issues that we'd love your contributions for! For each issue that's merged, you'll earn points towards the TBD Hacktoberfest Leaderboard. Winners will receive exclusive TBD Hacktoberfest 2024 swag!

We're kicking off Hacktoberfest with more events:

- October 10: Twitter Space - Hacktoberfest Rust Projects
- October 10: Exploring an AI-Powered GitHub Action

Be sure to add them to your calendar.

📌 What is Hacktoberfest?

Hacktoberfest is a month-long (October) celebration of open source software. It's sponsored by DigitalOcean, GitHub, and other partners. Check out Hacktoberfest's official site for more details and to register. Registration is from September 23 - October 31.

📂 Dive into TBD's Participating Projects

We included a wide variety of projects and issues for Hacktoberfest 2024. Each of our participating repos has a Hacktoberfest Project Hub, which contains all issues you can pick up with the hacktoberfest label. For easy reference, repos with multiple projects will have multiple project hubs.

Explore our participating repos below and see where you can make an impact:

developer.tbd.website

Languages: MDX, JavaScript, CSS, Markdown
Description: Docusaurus instance powering the TBD Developer Website (this site).
Links: Hacktoberfest Project Hub | Contributing Guide

web5-js

Language: TypeScript
Description: The monorepo for the Web5 JS TypeScript implementation. It features libraries for building applications with decentralized identifiers (DIDs), verifiable credentials (VCs), and presentation exchange (PEX).
Links: Hacktoberfest Project Hub: Protocol Explorer | Hacktoberfest Project Hub: General | Contributing Guide

web5-rs

Language: Rust
Description: This monorepo houses the core components of the Web5 platform, containing the core Rust code with Kotlin bindings. It features libraries for building applications with decentralized identifiers (DIDs), verifiable credentials (VCs), and presentation exchange (PEX).
Links: Hacktoberfest Project Hub | Contributing Guide

dwn-sdk-js

Language: TypeScript
Description: Decentralized Web Node (DWN) reference implementation.
Links: Hacktoberfest Project Hub | Contributing Guide

DWA Starter

Language: JavaScript
Description: Decentralized Web App (DWA) starter collection.
Links: Hacktoberfest Project Hub: VanillaJS | Hacktoberfest Project Hub: Vue | Contributing Guide

DIDPay

Language: Dart
Description: Mobile app that provides a way for individuals to interact with PFIs via tbDEX.
Links: Hacktoberfest Project Hub | Contributing Guide

DID DHT

Language: Go
Description: The did:dht method and server implementation.
Links: Hacktoberfest Project Hub | Contributing Guide

DCX

Languages: TypeScript, JavaScript
Description: A Web5 protocol for Decentralized Credential Exchange.
Links: Hacktoberfest Project Hub | Contributing Guide

Goose Plugins

Language: Python
Description: Plugins for Goose, an AI developer agent that operates from your command line.
Links: Hacktoberfest Project Hub | Contributing Guide

Fllw, Aliased

Languages: TypeScript, JavaScript
Description: A reference app for building Decentralized Web Apps.
Links: Hacktoberfest Task: Fllw | Hacktoberfest Task: Aliased

Hot Tip
Not a coder? No worries! developer.tbd.website has tons of non-code related issues up for grabs.

📝 Guide to TBD x Hacktoberfest 2024

✅ Topic Check: Contribute to projects that have the hacktoberfest label. This ensures your PR counts towards the official Hacktoberfest prizes.

🏷️ Label Insights:

- Start with an issue labeled hacktoberfest and comment ".take" to assign yourself the issue.
- Once your PR is submitted and approved, it will be labeled hacktoberfest-accepted, and you'll receive points on our leaderboard and credit towards the global Hacktoberfest 🎉
- If your PR is marked with a spam or invalid label, re-evaluate your contribution to make it count.

🥇 Code and Conduct: Adhere to our code of conduct and ensure your PR aligns with the repository's goals.

🫶 Community Support: Engage with fellow contributors on our Discord for tips for success from participants!

🆘 Seek Help: If in doubt, don't stress! Connect with the maintainers by commenting on the issue or chat with them directly in the #🎃┃hacktoberfest channel on Discord.

🎁 Leaderboard, Prizes and Excitement

Be among the top 10 with the most points to snag custom swag with this year's exclusive TBD x Hacktoberfest 2024 design! To earn your place in the leaderboard, we have created a points system that is explained below. As you have issues merged, you will automatically be granted points.

💯 Point System

🐭 Small — 5 points: for smaller issues that take limited time to complete and/or don't require any product knowledge.
🐰 Medium — 10 points: for average issues that take additional time to complete and/or require some product knowledge.
🐂 Large — 15 points: for meaty issues that take a significant amount of time to complete and/or possibly require deep product knowledge.

🏆 Prizes

The top 10 contributors with the most points will be awarded TBD x Hacktoberfest 2024 swag from our TBD shop. The top 3 contributors in our top 10 will be awarded very limited customized TBD x Hacktoberfest 2024 swag with your GitHub username on it. Stay tuned to our Discord for the reveal!
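
If it helps to see the scoring rule as code, here is a small illustrative sketch; the real leaderboard is maintained automatically by TBD, so treat this only as a mental model of the weights above.

```typescript
// Illustrative sketch of the points table above; TBD's leaderboard does the
// real accounting automatically.
type Weight = 'small' | 'medium' | 'large';

const POINTS: Record<Weight, number> = { small: 5, medium: 10, large: 15 };

// Total points for a contributor's accepted PRs.
function leaderboardScore(acceptedPRs: Weight[]): number {
  return acceptedPRs.reduce((total, w) => total + POINTS[w], 0);
}

// Two small fixes and one large feature: 5 + 5 + 15 = 25 points.
console.log(leaderboardScore(['small', 'small', 'large'])); // 25
```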

Keep an eye on your progress via our Leaderboard.

🎙️ Livestreams & Office Hours

Dive into our jam-packed Hacktoberfest schedule! Whether you're just here for fun or are focused on learning everything you can, we've got you covered:

Community Office Hours - Join us every Tuesday at 1pm ET for the month of October, where we will go over PR reviews, live Q&A, and more. This event occurs on Discord.

Twitter Space: Hacktoberfest Rust Projects - Join Staff Developer Advocate @blackgirlbytes & Software Engineer @kendallweihe this Thursday at 12pm ET, where you can learn about our core Rust SDK with Kotlin bindings and contributions we're seeking for this project. This event will be live on our Twitter.

Exploring an AI-powered GitHub Action - Join Head of Engineering Michael Neale & Staff Developer Advocate @blackgirlbytes this Thursday at 5pm ET, to learn more about an AI-powered action made by Goose, an AI developer agent that operates from your command line.

Live Events Calendar - Keep tabs on our Discord or developer.tbd.website for our future events & sneak peeks - we're always cooking up something new!

📚 Resources for First-Time Contributors

📖 How to Contribute on GitHub
🛠 Git Cheatsheet
🔍 Projects Participating in Hacktoberfest

Happy hacking and cheers to Hacktoberfest 2024! 🎉


Safle Wallet

Exploring the Safle Community Explorer Carnival: Your Cosmic Adventure Begins! ✨


Welcome, brave explorer, to the Safle Community Explorer Carnival! This is your chance to embark on a cosmic journey across the Web3 universe. Here, you’ll take on daring challenges, collect XP, and unlock new levels of adventure. With 8 thrilling missions waiting for you, it’s time to dive into the first two that will kick off your journey!

🏆 Rewards and Leaderboard 🌟
Compete on the leaderboard! Remember, this mission is a marathon and not a sprint. 🏃‍♂️💨 Winners will receive a share of a $15,000 reward pool in USDT, SAFLE, and RBTC! 💰🚀
Connecting Your Safle Wallet to Begin Your Journey.

Before you can start your adventure, you’ll need to link your Safle Wallet to access the Safle universe. Here’s how to get started:

Download the Safle Wallet & Get Safle ID — your decentralized identity in the Web3 cosmos.
— For iOS: Safle Mobile Wallet
— For Android: Safle Mobile Wallet

🌟 Now that you have what you need to begin your mission, here is how you step onto the Safle Community Carnival Spaceship! 🚀✨

Visit www.safle.com and click on Safle Carnival at the top right corner of the homepage. From your Safle Mobile Wallet, scan the Wallet Connect QR code using the in-app QR scanner, or if participating from a mobile browser, follow the instructions below. 👇🏻
1. Select Wallet Connect, click on QR generator, and copy the QR code.
2. On your Safle Mobile Wallet, paste the QR code and allow the device to paste.
3. Approve the transaction and we have LIFT OFF!

🚀 Your adventure kicks off with Mission 1, where you’ll set the foundation for your cosmic legacy! 🌌 The goal is simple: Signal Your Beacon of Alliance by completing the first challenge. This involves showcasing your support for the Safle Community by engaging with your social network and getting recognized as an official explorer. 🌟

Location: www.safle.com | XP: 50 XP
Task: Signal the Beacon of Alliance
To enroll officially in the Safle Community Explorer Carnival, you'll need to re-tweet and tag. It's the first step in leaving your mark on the galaxy and joining an elite group of explorers. Here is how to navigate through your mission:
1. Visit www.safle.com.
2. Click on Safle Carnival at the top right corner of the homepage.
3. Connect your Safle Wallet using Wallet Connect.
4. From the Safle Mobile Wallet, scan the Wallet Connect QR code using the in-app QR scanner.
5. Select Participate: Signal Your Beacon of Alliance.
6. Click on Signal The Beacon.
7. Connect your Twitter handle.
8. Click on Tweet, then click on Post to officially signal your beacon.
9. Click on Share Tweet and copy the tweet link. Paste the link in Step two and click on Verify.
10. After verifying, click on Repost and get verified.
11. Click on XP Points and claim your XP, and you're off to Mission 2!

Mission 2: Pledge the Cosmic Vow

Now that you’ve established your alliance, it’s time to make the next leap — pledging your Cosmic Vow. In Mission 2, you’ll cement your place in the Safle universe by joining the Safle Discord server and get bonus XP by referring friends to the journey.

Total XP: 100 XP
Task 1: Officially Saflenaut
In this mission, you take an oath to guide and lead your fellow explorers. Whether you’re new or seasoned, completing this mission ensures your ascension to the rank of Saflenaut.
XP: 40 XP
Steps:
Visit www.safle.com
Click on Safle Carnival at the top right corner of the homepage.
Select Participate: Officially Saflenaut.
Click on Discord and join the Safle Discord Server.
After successfully joining Safle Discord, click on Verify to authenticate your participation and verify your handle on the Safle Discord server.
And voilà! You've completed the first part of the mission and gathered XP automatically.
Task 2: Bring the Beacon Bearers! (Referral)
Refer a friend and once they Get SafleID & complete Task 1 (Join and Verify Discord), come back to the carnival page to claim your Bonus XP for helping grow the Safle community!
Bonus XP: 60XP
Steps:
Click on Referral.
Create your custom referral code, click on Activate, and share it with your Beacon Bearer.
Sign in with your Safle Wallet, confirm the sign transaction, and enter your security PIN to authenticate.
Activate your referral code and share it with your friend.
Enter your friend's custom referral code and click on Submit.
Collect your XP!!!

Congratulations!! You have successfully begun your journey as a Saflenaut. Keep an eye out for upcoming missions on Safle Twitter.

🌌 Your Cosmic Adventure Awaits! 🚀

As you complete missions and gather XP, you’ll level up and unlock more exciting rewards. Stay tuned — more missions will be revealed soon, offering new ways to explore the Web3 universe. Whether you’re claiming your spot among the stars or helping fellow explorers, the Safle Community Explorer Carnival is just the beginning of your interstellar adventure. 🌠

Get ready to rise, Saflenaut! Your destiny in the Web3 cosmos is waiting! 🌟

👉Safle Discord

👉Safle Twitter


Thales Group

Thales announces the distribution of an interim dividend and the reduction of its share capital by cancellation of treasury shares


The Board of directors of Thales (Euronext Paris: HO), meeting on 8 October 2024 under the chairmanship of Patrice Caine, decided:

- to distribute an interim ordinary cash dividend of €0.85 per share for the current 2024 financial year; and
- to reduce the share capital of Thales S.A. by cancelling 4,268,227 treasury shares held in registered form, representing 2.03% of its share capital, with immediate effect, upon the authorisation granted by the extraordinary general meeting of May 10, 2023.

Distribution of an interim ordinary cash dividend of €0.85 per share for the current 2024 financial year
The ex-dividend date will be 3 December 2024 and the interim dividend will be paid on 5 December 2024.

Reduction of share capital by cancellation of treasury shares

The 4,268,227 treasury shares held in registered form and about to be cancelled were bought back between February 15, 2023 and March 26, 2024 inclusive. They represent the balance of shares acquired under the share buyback program announced on March 3, 2022 and not yet cancelled.

As a consequence, the Board of directors acknowledged that the share capital of Thales now amounts to €617,825,739, divided into 205,941,913 shares with a nominal value of €3 each. This operation has no impact on Thales’ consolidated accounts or on net earnings per share.
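
As a quick arithmetic check, using only the figures stated in this release, the numbers are self-consistent:

$$205{,}941{,}913 \text{ shares} \times €3 = €617{,}825{,}739$$

$$205{,}941{,}913 + 4{,}268{,}227 = 210{,}210{,}140 \text{ shares outstanding before the cancellation}$$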

The information on the total number of voting rights and shares as well as the shareholding structure will be updated accordingly on the website:

- Section “Monthly statement on share capital and voting rights”: https://www.thalesgroup.com/en/investor/regulated-information

- Section “Share and shareholding”: https://www.thalesgroup.com/en/investor/retail-investors/share-and-shareholding


Spruce Systems

Meet the SpruceID Team: Jacob Healy

Jacob leverages his experience in managing complex software implementations to drive successful project execution, working with his team to transform ideas into impactful solutions for clients.
Name: Jacob Healy
Team: Product Delivery
Based in: Arvada, Colorado

About Jacob

My journey began with math and being interested in problems, but no problem in particular.  I was fortunate to find a great company solving important problems early in my career, where I could dive into development and IT implementation for government agencies.  That led to managing large-scale software implementations, where I landed on the particular problem of just how to get complex things done - process development and building strong teams.

I eventually decided to move on to SpruceID as the Product Delivery Lead because I think that digital identity is a particular problem that needs to be solved, and SpruceID is able to do it, do it right, and affords me the opportunity to contribute significantly and quickly to that goal.

Can you tell us about your role at SpruceID?

At SpruceID, I tackle the gap between “we should do this” and “we did this.” As the one ultimately accountable for the success of project execution across the organization, I have the opportunity to wear many hats, the privilege of working closely with everyone in the organization, and the honor of collaborating directly with and delivering value to all of our clients.

What do you find most rewarding about your role?

The most rewarding aspect of my role is seeing the tangible impact of our work on government agencies and, ultimately, the public. Implementing innovative digital credentialing solutions that streamline processes and enhance security gives me a great sense of accomplishment. Additionally, building, leading, and collaborating with a high-performing team that consistently delivers successful outcomes is incredibly fulfilling.

What are some of the most important qualities for someone in your role to have, in your opinion?

Adaptability, a steady hand, and the ability to learn quickly and hold context.  Seeing the forest through the trees is critical for overall success. It's important to navigate complex projects, engage effectively with diverse stakeholders, and make informed decisions that balance various priorities, often in real-time. Being able to hold the line and understand both technical and business aspects is essential. 

What are you currently learning, or what do you hope to learn?

It feels like everything, all the time. There is always something new in the digital identity space but being in such a fast moving startup keeps me on my toes in all ways.  I learn something new everyday from my colleagues and partners about scaling teams, processes, and tech.

What has been the most memorable moment for you at SpruceID so far?

There have been many, but one that stands out was kicking off our work with the State of Utah. It was really fun to go to Salt Lake City and meet with some great technologists, talk about verifiable digital credentials, and explore how the Department of Natural Resources could use them for off-highway vehicle permits. And then seeing it all launch later, of course.

What's the best piece of advice you’ve received since starting here?

The best piece of advice I've received since starting at SpruceID is to embrace a growth mindset.  The special twist, or magnification, a startup has on this has given me a new point of view.  I've learned that viewing challenges as opportunities to learn rather than obstacles encourages innovation and resilience. This perspective has empowered me to tackle complex projects with confidence and to continuously develop both personally and professionally.

What is some advice that you’d give to someone in your role who is early in their career?

Embrace continuous learning and remain adaptable. The technology landscape is always evolving, so staying curious will help you navigate changes effectively. Building strong relationships with your team and stakeholders is also key to successful project delivery.  Maybe more than anything though, pay attention.  You never know when that random conversation or piece of information will be the exact thing you need.  Good enough and great outcomes are often separated by Slumdog Millionaire Moments. 

How do you define success in your role, and how do you measure it?

The easy answer here is to make sure we meet the objective, have the impact we intended, and take into account our strategic goals. I like to add some qualifiers, though: the ends do not always justify the means. How does the team feel? How does the client feel? Did we do right by the customer and ourselves? If yes, then success.

Fun Facts

What do you enjoy doing in your free time?: Raising my kids, trying to make the most of every minute.

What is your favorite coding language (and why?): My favorite coding language is whatever gets the job done when and how it needs to be done. I defer this to the wiser, more engineering-focused minds. 

If you could be any tree, what tree would you be and why?: If I could be any tree, I would be an Ent (from the world of Tolkien). Ents are wise guardians of the forest—embodying strength, resilience, and a deep sense of responsibility. They are not just passive observers; they take action to protect what they care about. Similarly, I strive to be as thoughtful a leader as I can, actively working to safeguard and nurture the people I work with and the projects I work on, pursuing growth and harmony.

Interested in joining our team? Check out our open roles and apply online!

Apply to Join Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


KuppingerCole

Endpoint Protection Detection & Response (EPDR)


by John Tolbert

Malware has been a constant threat to all organizations and end users for decades. There are no signs of the threat of malware abating. KuppingerCole recommends the use of Endpoint Protection Detection and Response (or eXtended Detection and Response) solutions for every size and type of organization. Use this Buyer's Compass as a guide to selecting the right solution for your company.

Civic

Could Reusable Credentials Be Right For Your dApp?


We’re proud to announce Reusable Credentials using Civic Pass. Ultimately, Reusable Credentials will allow for a massive UX simplification, making signing up for services or completing transactions far easier for users. The efficiencies may also translate into cost savings for businesses. With Reusable Credentials, users verify their personal information once, and then that information is […]

The post Could Reusable Credentials Be Right For Your dApp? appeared first on Civic Technologies, Inc..


SC Media - Identity and Access

Hackers still prefer credentials-based techniques in cloud attacks

Despite enterprises' increased use of multi-factor authentication, phishing techniques like adversary-in-the-middle attacks allow attackers to bypass this security feature and steal credentials.



Ocean Protocol

Formula 1 Prediction Challenge: 2024 Mexico Grand Prix

Overview

The 2024 Mexico Grand Prix Prediction Challenge invites participants to build machine learning models that predict vital aspects of Formula 1 pit stop strategies. Using historical data from the 2018–2023 Mexico Grand Prix and race data from the 2024 season, participants will analyze variables such as lap times, stint numbers, tire compounds, and pit stop timing. Participants are also encouraged to use their own data sources and pre-race variables such as weather conditions, track temperature, or any other relevant factors they deem important. Accurate predictions of stints, tire compounds, and average lap times are essential for success.

Time-series data, like lap times and sector splits, combined with categorical data (e.g., tire compounds: Soft, Medium, Hard), will help participants build models that predict key elements of pit stop strategies. Models must account for various race conditions, with evaluation based on metrics such as Mean Squared Error (MSE) for lap time and tire compound predictions.
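
For participants new to the metric, Mean Squared Error is just the average of the squared prediction errors. Here is a minimal sketch, in TypeScript purely for illustration; the official evaluation is run by the Ocean team post-race.

```typescript
// Mean Squared Error: MSE = (1/n) * sum_i (y_i - yhat_i)^2.
// Illustrative only; the Ocean team runs the official post-race evaluation.
function meanSquaredError(actual: number[], predicted: number[]): number {
  if (actual.length === 0 || actual.length !== predicted.length) {
    throw new Error('Inputs must be non-empty and the same length.');
  }
  const sumSquares = actual.reduce(
    (acc, y, i) => acc + (y - predicted[i]) ** 2,
    0,
  );
  return sumSquares / actual.length;
}

// Three predicted average lap times (seconds) vs. the race outcome.
console.log(meanSquaredError([78.2, 79.0, 80.5], [78.0, 79.4, 80.1])); // 0.12
```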

Objectives

The goal is to develop machine learning models to predict the number of stints, tire compounds for each stint, laps per stint, and average lap times. Participants will leverage both the provided datasets and any additional data sources they choose to incorporate, such as pre-race weather, track temperature, historical race information, and other race-specific factors.

The models should handle varying race strategies that evolve due to factors like tire degradation, weather changes, or safety car interventions. Evaluation will focus on the accuracy of predictions, with the Ocean team testing the models post-race to assess their performance.

Data

Participants will have access to two primary datasets: historical race data from the 2018–2023 Mexico Grand Prix and race data from the 2024 season. These datasets include over 30 variables: lap times, stint numbers, tire compounds, pit stop timings, and driver positions. In addition, participants may use their own data sources and incorporate pre-race variables, including weather forecasts, track temperature, or any other external data relevant to the race.

The dataset includes continuous variables like lap times and sector splits, as well as categorical variables like tire compounds and stint numbers. The data allows models to study pit stop strategies and performance over time, improving robustness and generalization across different race conditions and strategies.
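
One common way to feed a categorical variable like tire compound into a model alongside the continuous features is one-hot encoding. The sketch below is illustrative only: the compound labels follow the article, but the function is not part of the official challenge tooling.

```typescript
// One-hot encoding for the categorical tire-compound variable.
// Illustrative; not part of the official challenge tooling.
const COMPOUNDS = ['Soft', 'Medium', 'Hard'] as const;
type Compound = (typeof COMPOUNDS)[number];

function oneHot(compound: Compound): number[] {
  return COMPOUNDS.map((c) => (c === compound ? 1 : 0));
}

console.log(oneHot('Medium')); // [0, 1, 0]
```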

Mission

The mission is to develop machine learning models that accurately predict pit stop strategies, including the number of stints, tire compounds for each stint, laps per stint, and average lap times. Predictions will be based on pre-race data such as starting grid positions, weather forecasts, track temperature, and historical race data from previous Mexico Grand Prix events.

Flexibility and adaptability to race dynamics — like tire degradation, changing weather, and unforeseen race events — are key. The goal is to achieve high accuracy and demonstrate the ability to generalize predictions across varying race conditions and scenarios.

Rewards

The $10,000 prize pool will be distributed among the top 10 performers:

Prize pool rewards and point distribution

Participants will also earn points toward the 2024 championship. Accumulating points correlates with increased rewards, as seen in the 2023 Championship, where top performers received an additional $10 for each point earned throughout the year.

Opportunities

This challenge is not just about winning rewards; it's about enhancing your skills in advanced data science techniques such as regression analysis, time series modeling, and classification algorithms. By applying these techniques to real-world Formula 1 data, you’ll learn how to analyze complex datasets, predict pit stop strategies, and derive actionable insights that inform competitive decision-making. This experience will prepare you for roles in sports analytics and other data-driven industries, equipping you with practical expertise in race strategy analysis.

How to Participate

Are you ready to join us on this quest? Whether you’re a seasoned data pro or just starting, there’s a place for you in our vibrant community of data scientists. Let’s explore and discover together on Desights, our dedicated data challenge platform. The challenge runs from October 3 until October 24, 2024, 13:00 UTC. Click here to access the challenge and become part of our data science community.

Community and Support

To engage in discussions, ask questions, or join the community conversation, connect with us on Ocean’s Discord channel #data-science-hub or the Desights support channel #data-challenge-support.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to keep up to date. Chat directly with the Ocean community on Discord — or track Ocean’s progress on GitHub.

Formula 1 Prediction Challenge: 2024 Mexico Grand Prix was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Global Remote Access for the Modern Organization


by Anne Bailey

This KuppingerCole Whitepaper explores the challenges of managing third-party and remote access in today’s cloud-centric work environments. As organizations increasingly rely on external contractors, vendors, and other third parties, they face growing complexity in administering secure access to systems and resources. This paper emphasizes the need for robust access controls, highlights the limitations of traditional technologies like VPNs and VDIs, and discusses the advantages of fit-to-purpose solutions like ARCON’s Global Remote Access, which offers secure and flexible access to both internal and external users.

Innopay

Mariane ter Veen representing INNOPAY at exclusive European Data Summit in Italy


We are proud to announce that Mariane ter Veen, Director of Data Sharing at INNOPAY, is speaking at the prestigious European Data Summit in Italy, taking place from 8 to 10 October. Each year, this invitation-only event organised by Konrad Adenauer Stiftung – a leading EU think tank in digital innovation – gathers key policymakers, civil society leaders and innovators to shape the digital agenda for the upcoming EU mandate. 

Mariane was selected as a speaker for her thought leadership on digital sustainability, which is a crucial topic for the EU as it defines its future digital policies. The summit provides a unique platform for Mariane to advocate for digital sustainability to become a guiding principle in the next EU mandate. Her participation underscores the recognition of her work in supporting responsible data sharing and sustainable digital transformation across Europe.

Stay tuned for more updates as Mariane lobbies for a future in which digital sustainability drives progress and innovation within the EU’s digital landscape.

For more information about the event, visit the event site.


This week in identity

E58 - Microsoft SFI / Okta SIC / Funding for Apono, Hydden and P0 Security


Summary

In this episode, Simon and David Mardy discuss the rapidly evolving landscape of identity security, highlighting significant trends, initiatives from major tech companies, and the importance of cyber resilience. They explore the recent funding rounds for startups in the identity space, emphasizing the need for innovative solutions to address ongoing challenges in identity governance and access management. The conversation underscores the critical role of identity security in today's digital business environment and the necessity for organizations to adapt to emerging threats.


Keywords

identity security, access management, cyber resilience, Microsoft, Okta, funding rounds, identity governance, PAM, IGA, cybersecurity


Links

Microsoft Secure Future Initiative Okta Secure Identity Commitment Apono $15m funding Hydden $4m funding P0 Security $15m funding




KuppingerCole

Nov 13, 2024: Cloud Backup for AI Enabled Cyber Resilience

Organizations and society have become dependent upon digital services which has increased the business impact of cyber threats and hence the need for cyber resilience. Organizations need to take steps beyond preventing cyber-threats from impacting their digital infrastructure – they must also be able to respond to and recover when incidents occur. Data backup solutions are an essential element of every organization’s cyber resilience plan.

Dark Matter Labs

TreesAI’s Heat-sensing collaboration


“Heat is the gravest and most urgent climate risk driver for human health. At greatest risk are specific population groups, such as …in areas with a strong urban heat island effect or with inadequate access to cooling.”

In this blog we launch our Heat-Sensing collaboration with Vaiv Company and the Information Systems Intelligence Lab at Yonsei University, made possible thanks to Innovate UK’s UK and South Korea data-driven urban innovation Bi-Lateral CR&D competition.

1. Nature Provides the Solution: Tackling Heat with Green Cities

As our cities swell and concrete stretches wider, the threats of heatwaves grow, increasingly cornering us into sweltering urban jungles. Imagine this: over 30% of the global population now faces deadly heat extremes for nearly a month each year. In Seoul alone, a study highlighted a staggering 8.4% spike in mortality rates during such sweltering times. In the UK, the summer 2022 heat periods were associated with 2,985 deaths in England alone. The risks are especially severe for the most vulnerable among us — women, the elderly, and those without the financial means to create internal thermal comfort.

But what if our cities could breathe and cool themselves naturally? Urban parks, green roofs, and tree-lined streets are nature’s air conditioners, significantly reducing urban heat. These aren’t just oases of cool; they’re our frontline defence. They shade our pavements, lower surrounding temperatures, and purify our air, making our urban spaces not just bearable, but liveable.

Image Credit: Raed Mansour

This builds on our work to date developing the LBS platform (see the next section), adding functionality so the tool is not just for analysing risks, but for managing them by empowering citizens via generative AI. Alongside Vaiv, we are partnering with the Information Systems Intelligence Lab at Yonsei University, a smart city and living lab group that explores how to address social issues with open civic participatory methodologies, thanks to a recent Innovate UK x Kaia grant.

2. Risk analysis: Location-Based Scoring in Songpa District in Seoul

Image 1. Cause-effect relationships among risk components like hazard, exposure, and vulnerability

To enhance decision-making capabilities, the TreesAI team at Dark Matter Labs is supporting municipalities to perform risk-based vulnerability assessments — what we call Location-Based Scoring (LBS). We have implemented this in Songpa, Seoul. By streamlining various remote, weather, institutional, and citizen sensing data, the LBS data modelling framework sets up a comparative spatial assessment of the level of risk that the heat island effect poses to citizen health and wellbeing. For a more detailed exploration of the LBS methodology, check out our related work in Stuttgart: TreesAI is implementing location-based scoring in Stuttgart.

Image 2. Input data aggregated via a geometric weighted mean to generate heat risk index

Songpa District in Seoul, South Korea, was selected for its high population density and varied urban fabric, presenting an ideal setting to study and tackle the Urban Heat Island Effect (UHIE). The detailed analysis is enabled by a high spatial resolution of 100x100 metres, essential for crafting effective hyper-local climate adaptation strategies. However, the absence of street-level demographic data, particularly concerning age and income, poses challenges in precisely assessing vulnerability and pinpointing areas where the most at-risk populations are exposed to extreme heat.
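
Image 2's caption names a geometric weighted mean as the aggregation step. The sketch below is a hedged illustration of that formula, index = exp(Σ wᵢ·ln xᵢ / Σ wᵢ) over normalized input layers; the layer names and weights are our own illustrative assumptions, not TreesAI's actual configuration.

```typescript
// Hedged illustration of a geometric weighted mean heat-risk index:
// index = exp( sum_i w_i * ln(x_i) / sum_i w_i ), inputs normalized to (0, 1].
// Layer names and weights here are illustrative assumptions only.
type Layer = { value: number; weight: number };

function heatRiskIndex(layers: Layer[]): number {
  const totalWeight = layers.reduce((sum, l) => sum + l.weight, 0);
  const logSum = layers.reduce((sum, l) => sum + l.weight * Math.log(l.value), 0);
  return Math.exp(logSum / totalWeight);
}

// Example grid cell: high hazard, moderate exposure, high vulnerability.
console.log(
  heatRiskIndex([
    { value: 0.9, weight: 2 }, // hazard, e.g. land-surface temperature
    { value: 0.5, weight: 1 }, // exposure, e.g. population density
    { value: 0.8, weight: 1 }, // vulnerability, e.g. age profile
  ]).toFixed(2), // ≈ 0.75
);
```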

LBS was then paired with the Green Unified Scenarios (GUS) framework, which explores the potential of urban greenery for cooling. For more information check out How VAIV uses GUS in Seoul. The combination of these two tools has provided VAIV with a data framework that not only models the effects of different tree canopy scenarios but also identifies areas vulnerable to heat risk that would benefit the most from an NbS adaptation strategy.

Image 3. GIF showing the cooling potential of canopy growth over a period of 10 years. Results were computed thanks to our partners LucidMinds AI using the Green Unified Scenarios (GUS) tool.

What we learnt

A significant challenge to cities making data-enabled decisions is the comprehensive collection and integration of relevant data — spanning environmental, physical, and, crucially, social dynamics. Understanding community values ensures that digital twins reflect real-life scenarios, making their insights more actionable. Beyond gathering data, transforming these into meaningful insights and actions poses another layer of complexity. This includes harnessing sociocultural data to assess resilience and coping mechanisms, essential for risk management.

Following this, we will be exploring how more localised data inputs from citizen surveys, digital engagement, and AI-driven analysis of social media, will enhance our narrative on risk indicators as part of the heat-sensing project.

3. Driving Adaptation: Recommending Actions

Our aim at TreesAI is not to just analyse risks but to actively manage them by empowering citizens and city officials with the tools for tailored adaptation plans and effective crisis management.

While our data-driven risk assessment and digital twin modelling in Songpa underscored the potential for urban green infrastructure, we encountered a significant challenge: the scarcity of available land for large-scale nature-based solutions (NBS). The most heat-vulnerable areas are also the most densely populated, with limited space for expansive green projects. This paradox highlights a critical need for innovative, space-efficient strategies that integrate nature into the urban fabric without requiring large tracts of land.

To move from analysis to feasible NbS recommendations, during the Heat-sensing project we will be developing a framework to match heat adaptation actions to different urban landscapes. For example, in high-rise residential areas, community gardens and tree planting initiatives that knit communities closer together while providing thermal comfort may be key, whereas in industrial zones installing cool roofs and permeable pavements can curb heat accumulation and manage water runoff. Mixed-use centres gain from pocket parks and shaded walkways, while commercial zones can benefit from green roofs and tree-lined avenues.

It is also important to note that the impact of NbS often spreads well beyond its boundaries — for example, rural buffer rings have been shown to reduce UHIE. Addressing the relationship between the resolution of heat risk assessments and the granularity of actionable measures is crucial. Higher resolution data allows for more precise risk assessments, but the corresponding actions may be implemented at a broader scale, influencing the scope and design of adaptation scenarios. This disparity necessitates careful consideration in scenario planning to ensure that strategies are both effective and feasible, reflecting the detailed insights gained from high-resolution assessments while accommodating the broader scale of implementation.

See the table below for initial analysis of which NbS to recommend in which context.

4. Where next: the Heat-Sensing Project

Local people and organizations who are most directly affected by — and often disproportionately vulnerable to — the impacts of climate change are often left out of critical decision-making processes to address them, such as the design of adaptation programs or plans…. Smaller, local organizations and communities get boxed out, unable to access the funding and other resources they need to recover and build resiliency to floods, droughts, heat waves and myriad other impacts of climate change.

We believe that climate change is a collective problem, with these tools requiring a people-centred approach that drives action. During the Heat-sensing project we will therefore be exploring two main avenues of research and development.

1. Localising a risk-based vulnerability assessment (LBS) framework to focus on what matters most to people: we are exploring a data framework that streamlines various remote, weather, institutional, and citizen stories and sensing to assess the climate risks of heat island effects in cities and support the creation of community engagement strategies.

2. Deploying Assistive AI and civic participatory tools to drive action: we will use generative AI to run different adaptation and mitigation scenarios from individual to institutional levels to assess the potential cooling effect of a broad spectrum of interventions — providing easy-to-understand personalised recommendations.

Much of the work will be about talking to stakeholders, so whether you are a city planner or a concerned citizen, please don’t hesitate to get in touch at treesai@darkmatterlabs.org

About the partners

At Dark Matter Labs, we view the interconnected crises of our time as symptoms of a deeper, structural miscoding of our economic systems. We understand these codes to be physical (e.g. biodiversity, energy, labour and materials), structural (e.g. money creation, embedded inequality and private property rights) and psychological (e.g. failure of the imagination).

Recognising the complex, entangled reality of living systems, we are exploring alternative pathways for organising society and stewarding the shared planetary commons. Our working hypothesis is that these pathways must be rooted in a radical reframing of our relationship to everything; from technology and money to land and the other-than-human world. We are framing this transformation as a shift towards Life-Ennobling Economies.

We’re a multidisciplinary team with a shared passion for applying innovative approaches to complex societal challenges. With expertise in disciplines ranging from accountancy, policy and law through to urban design and organisational culture, we work in fluid teams to ensure compound learning.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

VAIV Company (formerly Daumsoft) is a prominent company in South Korea specializing in artificial intelligence and big data, dedicated to collecting and analyzing large datasets to extract meaningful insights since 2000. The company has developed a range of AI-driven businesses, including extensive platforms that leverage its proprietary technology to address various industry needs. Its key products include VAIV Search, VAIV Smartchat, VAIV Assistant, and Sometrend.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

The ISi Lab, part of the Graduate School of Information at Yonsei University, is a leading research group in the fields of smart cities and citizen-participatory living labs, under the guidance of Professor Jeonghoon Lee. The lab spearheads a variety of national R&D projects, focusing on innovative and practical solutions for the development and implementation of smart cities. One of its core approaches is the living lab model, which fosters the creation of citizen-centered solutions through active community participation.

Through its extensive experience gained from numerous R&D projects, ISi Lab tailors smart city strategies to meet the unique needs and characteristics of each city. The lab’s research efforts are geared towards addressing urban challenges and providing a roadmap for sustainable urban development.

Furthermore, ISi Lab conducts comprehensive smart city assessments across over 50 cities worldwide, regularly publishing the Smart City Index Report. This report evaluates the progress of smart city development in each city and provides customized strategies based on the results, offering technological and policy-driven solutions that align with the specific needs of different regions.

TreesAI’s Heat-sensing collaboration was originally published in Dark Matter Laboratories on Medium, where people are continuing the conversation by highlighting and responding to this story.


Thales Group

Hera planetary defence mission successfully launched


Hera aims to confirm if it is possible to deflect a hazardous asteroid on a collision course with the Earth, as a repeatable strategy ready for use in case of an actual asteroid threat.

Thales Alenia Space provided key technologies onboard the Hera spacecraft, which will send essential data from the Didymos binary asteroid, similar to one that could potentially impact our planet.

Madrid, 8 October, 2024 – Hera, the European Space Agency’s (ESA) first planetary defence mission, was successfully launched yesterday aboard a SpaceX Falcon 9 rocket, from Cape Canaveral in Florida. The satellite is now heading to a unique target among the 1.3 million known asteroids of our Solar System.

If an incoming asteroid were to threaten Earth, what could be done to cope with the situation? On September 26th 2022, NASA’s Double Asteroid Redirection Test (DART) mission performed humankind’s first test of asteroid deflection by crashing into the Great-Pyramid-sized Dimorphos moonlet. This resulted in a shift of its orbit around the mountain-sized Didymos main asteroid.

Hera networking with Cubesats ©ESA 

Next comes ESA’s own contribution to the international Asteroid Impact & Deflection Assessment (AIDA) collaboration: the Hera mission will travel to Dimorphos so as to gather vital close-up data regarding the deflected body and turn DART’s grand-scale kinetic impact experiment into a well-understood and potentially repeatable planetary defence technique. Hera will provide in particular accurate measurements concerning the asteroid’s mass, as well as crucial information about its make-up and structure, which are essential to interpret the outcome of the impact.

The Hera mission will also carry out the most detailed exploration to date of a binary asteroid system – although binaries make up 15% of all known asteroids, they have never been studied in detail. Hera will also perform technology demonstration experiments, including the deployment of ESA’s first deep space ‘CubeSats’ – shoebox-sized spacecraft that will venture closer than the main mission and eventually land – and an ambitious test of 'self-driving' for the main spacecraft, based on vision-based navigation. OHB System AG (Germany), as prime contractor of Hera, led the industrial consortium, with responsibility for the overall spacecraft design, development, assembly, and testing.

Thales Alenia Space’s contribution: a teamwork between Spain, Italy and Belgium

Thales Alenia Space, a joint venture between Thales (67%) and Leonardo (33%), provided key technologies onboard the Hera spacecraft. Thales Alenia Space in Spain was responsible for the communications subsystem, which makes it possible to control and track the spacecraft from up to 500 million kilometres away and to send all the information gathered by Hera back to Earth. Thales Alenia Space in Italy developed the state-of-the-art Deep Space Transponder, while Thales Alenia Space in Belgium assembled the Travelling Wave Tube Amplifiers (TWTA) built by Thales, and developed the Power Conditioning and Distribution Unit (PCDU), which provides power to the spacecraft throughout its lifetime.

Safeguarding our planet

Asteroids are bodies of rock and metal, originating in the nebulae around young stars, that never grew into planets. Among them, those with an orbit that brings them close to Earth (within 45 million kilometres), known as near-Earth asteroids, carry a risk of hitting the Earth. There are plenty of such bodies in our Solar System, from tiny ones measuring a few metres (there are 40–50 million of them) up to much scarcer larger ones measuring more than 1 km (there are fewer than 1,000 of them).

Neither the smallest near-Earth asteroids nor the largest ones represent a real threat to humanity. Small asteroids actually hit the Earth quite frequently (every two weeks) with no consequences. The larger ones, although potentially dangerous, are well known and tracked, and an impact by one of them occurs only once in millions of years. In fact, a 10 km asteroid impact is the most widely accepted explanation for the Cretaceous extinction around 66 million years ago, which wiped out three-quarters of plant and animal species, among them the dinosaurs.

Hera scans DART's impact crater ©ESA 

The mid-sized asteroids of more than 100 metres are the ones we need to worry about. There are about 30,000 near-Earth asteroids in the 100 to 300 metre size class, 82% of them still to be spotted, and one of them hits the Earth roughly every 10,000 years. The impact energy of such an asteroid is equivalent to around 50 megatons of TNT, the power of a “Tsar Bomba”. The effect of such an impact would be devastating if it reached a populated area, capable of destroying an entire city or of creating a tsunami if it struck the sea.

The Didymos binary asteroid system is prototypical, in terms of size, of the thousands of asteroids that pose a hazardous risk of impact to our planet. Around the Didymos main body, 780 metres in diameter, orbits the 150-metre Dimorphos moonlet, which is the first body in the Solar System to have had its orbit measurably changed through human action, by the DART impact, and is also the smallest asteroid yet visited by humankind.

The Hera spacecraft will reach the binary asteroid in October 2026, after a two-year cruise phase. The day Hera reaches Didymos, it will be 195 million km away from Earth.

ABOUT THALES ALENIA SPACE

Drawing on over 40 years of experience and a unique combination of skills, expertise and cultures, Thales Alenia Space delivers cost-effective solutions for telecommunications, navigation, Earth observation, environmental management, exploration, science and orbital infrastructures. Governments and private industry alike count on Thales Alenia Space to design satellite-based systems that provide anytime, anywhere connections and positioning, monitor our planet, enhance management of its resources and explore our Solar System and beyond. Thales Alenia Space sees space as a new horizon, helping to build a better, more sustainable life on Earth. A joint venture between Thales (67%) and Leonardo (33%), Thales Alenia Space also teams up with Telespazio to form the parent companies’ Space Alliance, which offers a complete range of services. Thales Alenia Space posted consolidated revenues of approximately €2.2 billion in 2023. Thales Alenia Space has around 8,600 employees in 9 countries, with 16 sites in Europe and a plant in the US.

www.thalesaleniaspace.com

THALES ALENIA SPACE – PRESS CONTACTS

Oriol Casas Thió
Tel.: +34 618 509 197
oriol.casasthio@thalesaleniaspace.com

Tarik Lahlou
Tel: +33 (0)6 87 95 89 56
tarik.lahlou@thalesaleniaspace.com

Catherine des Arcis
Tel: +33 (0)6 78 64 63 97
catherine.des-arcis@thalesaleniaspace.com

 


PingTalk

Why Siloed IAM Is a Burden on IT Resources and Security

Learn how Identity silos leave your organization vulnerable to wasted company resources, miscommunication, and serious security concerns.

Monday, 07. October 2024

liminal (was OWI)

Two Adjacent Markets Collide: Customer IAM and Master Data Management

The post Two Adjacent Markets Collide: Customer IAM and Master Data Management appeared first on Liminal.co.

SC Media - Identity and Access

Okta Classic customers told to check logs for sign-on bypass

Security pros say teams running Okta Classic should take immediate action, checking their logs for exploitation.



Thales Group

Thales to bring its expertise in advanced air mobility to the North Dakota Unmanned Autonomous Systems Council

Thales, the global high technology leader and integration partner of North Dakota’s system for advanced UAS operations, Vantis, will further its collaboration with the UAS Council members and broader ecosystem to harmonize key state objectives.

North Dakota cements its leadership position as a global hub for autonomy and advanced air mobility.

Thales, a global technology leader in Aerospace, Defense, Digital Identity & Security, has joined the North Dakota Unmanned Autonomous Systems (ND UAS) Council, further bolstering North Dakota's leadership in the UAS industry.

As part of this partnership, Thales will bring decades of experience in aerospace solutions and specific knowledge on advanced air mobility to the Council. The Group will provide cutting-edge solutions in air traffic management, relying on its expertise in digital technologies including sensors, AI, connectivity and cybersecurity. Thales will significantly contribute to the Council's mission of advancing UAS technology in North Dakota, shaping supportive policies and fostering local workforce development. Additionally, Thales is the systems integration partner for Vantis, North Dakota’s system for beyond-visual-line-of-sight UAS operations. Vantis is administered by the Northern Plains UAS Test Site (NPUASTS).

“We couldn’t be happier to welcome Thales to the ND UAS Council,” said Matt Dunlevy, President of the ND UAS Council. “Their global expertise and innovative approach to aerospace technology will be extremely valuable as we work to keep North Dakota at the forefront of the UAS industry. Together, we will seek to redefine the possibilities of unmanned systems, ensuring that North Dakota remains a leader in this rapidly evolving field.”

The ND UAS Council is a leading organization dedicated to advancing UAS technology and operations in North Dakota. The Council brings together industry leaders, policymakers, and stakeholders to promote innovation, advocate for supportive policies, and drive workforce development in the UAS sector.

The addition of Thales to the ND UAS Council marks a significant milestone in the ongoing effort to establish North Dakota as a global hub for UAS innovation and development. The Council is committed to fostering collaboration among industry leaders to ensure the state continues to lead in this dynamic and rapidly growing industry.

“Joining the ND UAS Council is a strategic move for Thales as we continue to expand our footprint in the UAS domain,” said Frank Matus, Director of Digital Aviation Market Development for the Americas at Thales. “North Dakota is a key hub for UAS innovation, and we are eager to collaborate with the Council and its members to advance the industry and contribute to the development of next-generation unmanned systems.”

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.

About Thales in the U.S.

In the United States, Thales has conducted significant research and development, manufacturing, and service capabilities for more than 130 years. ​ Today, Thales has 37 locations around the U.S., employing nearly 5,000 people. ​ Working closely with U.S. customers and local partners, Thales is able to meet the most complex requirements for every operating environment.


Microsoft Entra (Azure AD) Blog

Join us at the Microsoft Entra Suite Showcase!

This fall, we are bringing the Microsoft Entra Suite Showcase to cities worldwide. Join us to explore how our latest advancements in secure identity and access management can help safeguard your organization's digital assets.

Announced earlier this year, the Microsoft Entra Suite unifies identity and network access security—a novel and necessary approach for Zero Trust security. It provides everything you need to verify users, prevent overprivileged permissions, improve detections, and enforce granular access controls for all users and resources.

Register now to join us for a half-day event in the following locations:

September 23: Mexico City, Mexico (Registration Full)
September 25: São Paulo, Brazil (Registration Full)
September 30: Amsterdam, Netherlands (Registration Full)
October 1: London, England (Registration Full)
October 8: Dallas, TX, USA (Registration Full)
October 8: Johannesburg, South Africa (Registration Full)
October 9: Sydney, Australia (Registration Full)
October 10: Atlanta, GA, USA (Registration Full)
October 14: Berlin, Germany (Register Here)
October 16: Singapore, Singapore (Register Here)
October 21: Silicon Valley, CA, USA (Register Here)
November 6: Dubai, UAE (Register Here)
November 12: Mumbai, India (Register Here)
November 13: Paris, France (Register Here)
November 14: Bangalore, India (Register Here)
December 4: New York, NY, USA (Register Here)
December 10: Chicago, IL, USA (Register Here)
December 10: Toronto, Canada (Register Here)

To learn more about Microsoft Entra Suite: 

Read the announcement on the Microsoft Security blog
Watch the Zero Trust Spotlight on demand

We look forward to seeing you there!


KuppingerCole

Nevis Identity Suite and Authentication Cloud


by Alejandro Leal

This KuppingerCole Executive View report examines the Nevis Identity Suite and Authentication Cloud solution. Focusing on critical sectors like government, healthcare, banking, insurance, and iGaming, Nevis Security is dedicated to managing and securing customer identities efficiently while enhancing the overall customer experience. The report highlights the crucial role of Customer Identity and Access Management (CIAM) solutions in today's digital landscape, where secure identity management is not just a necessity but also a strategic advantage.

Adapting to Europe’s Latest Cybersecurity Laws: A Legal Perspective


by Stefan Hessel

In today's rapidly evolving digital landscape, staying ahead of cybersecurity threats is more critical than ever. As organizations grapple with increasing regulatory demands, understanding the implications of new legislation becomes essential. At cyberevolution 2024, Stefan Hessel, Attorney-at-Law at reuschlaw, will delve into the intricacies of the EU's latest cybersecurity regulations, specifically the NIS 2 Directive and the Cyber Resilience Act.

These regulations represent a significant shift in how companies must approach cybersecurity, focusing on both operational and product-related security. Stefan's talk will provide a comprehensive overview of these new obligations, highlighting the key challenges and liability risks companies face and offering practical insights on compliance.

For professionals navigating the complex terrain of cybersecurity law, this session is a must-attend. Gain valuable insights into how these regulations will impact your organization and learn best practices for integrating these legal requirements into your cybersecurity strategy. Don't miss this opportunity to prepare your business for the future of cybersecurity regulation.


Datarella

A Satellite-Enabled Mesh Network


This is the second article in a series of technical posts about how Track & Trust works at a component level. Building on your understanding of our mesh network technology, this post asks the question: what if everything starts going wrong? The answer, as you will see, lies in our Satellite-enabled Mesh Network nodes. Quick navigation links to the follow-up articles will be provided at the bottom of each article once the series is complete. For now, let's jump in.

Mesh network technology alone can't get the job done. We need Track & Trust logistics tracking and communication to function even in the most challenging circumstances, and that means addressing a few more challenges as well as the limitations of the technology itself.

The Challenges

What if 4G doesn't work at all? Remote logistics operations, or external circumstances like war or political instability, can cause exactly this. In such cases tracking becomes a challenge, especially when electrical power is also unreliable, so we needed to find alternative solutions.

We spent quite some time solving these issues and, fortunately, we did indeed find a way. With funding from the European Space Agency, we built a Satellite-enabled Mesh Network. Some mesh nodes now have extra superpowers, enabling them to do everything they could before, and more. These satellite-enabled mesh nodes are more expensive, but they have a special trick up their sleeves: they can post data to our servers even without 4G internet. Here's how it works.

Satellite-Enabled Mesh Nodes

The yellow box in the picture represents a satellite-enabled mesh node. By adding an Iridium short burst data (SBD) transmitter to it, we gave it superpowers. Connecting this transmitter to a mesh node via waterproof cables turns it into a super node. Our partners at OroraTech built this part of the system. Data arrives at this node using peer-to-peer communication over wifi-direct. The Iridium SBD transmitter consumes information from cellmesh to find out which data hasn't yet been posted to the internet via 4G.

To make this work, a clear view of the sky is necessary, which means mounting the nodes outdoors; that's why we waterproofed the enclosures. Strategic placement of these nodes is crucial. A good position is one where many other nodes will pass by, and it should also have a relatively solid electrical power supply. In our pilot, the roof of a local school with a solar power installation already in place proved to be an ideal location, and it allowed us to test the system in a real-world setting.

Each data package must be broken down into packets small enough to send over Iridium short burst data. As a result, each user action produces 8-9 individual satellite messages. These messages are encoded and transmitted individually, error-checked on arrival, and then recombined before being posted to our backend systems.
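To make the chunking step concrete, here is a minimal sketch in Python of how a payload might be split into checksummed satellite messages and recombined on the other side; the 100-byte chunk size, header layout, and CRC choice are assumptions for illustration, not Datarella's actual wire format.

```python
import zlib

SBD_PAYLOAD_BYTES = 100  # assumed per-message budget for illustration; real SBD limits vary by modem

def split_into_sbd_chunks(message_id: int, payload: bytes) -> list[bytes]:
    """Split one user action's payload into numbered, checksummed SBD-sized chunks."""
    body_size = SBD_PAYLOAD_BYTES - 8  # 4-byte header + 4-byte CRC reserved per chunk
    parts = [payload[i:i + body_size] for i in range(0, len(payload), body_size)]
    chunks = []
    for seq, part in enumerate(parts):
        # header: 2 bytes message id, 1 byte sequence number, 1 byte total count
        header = message_id.to_bytes(2, "big") + bytes([seq, len(parts)])
        crc = zlib.crc32(header + part).to_bytes(4, "big")
        chunks.append(header + crc + part)
    return chunks

def reassemble(chunks: list[bytes]) -> bytes:
    """Error-check each received chunk, order by sequence number, and recombine."""
    parts, total = {}, None
    for chunk in chunks:
        header, crc, part = chunk[:4], chunk[4:8], chunk[8:]
        if zlib.crc32(header + part).to_bytes(4, "big") != crc:
            raise ValueError("corrupt chunk: request retransmission")
        seq, total = header[2], header[3]
        parts[seq] = part
    if total is None or len(parts) != total:
        raise ValueError("missing chunks")
    return b"".join(parts[i] for i in range(total))

# round-trip check
original = b"example user action payload " * 30  # ~840 bytes -> 10 chunks in this sketch
assert reassemble(split_into_sbd_chunks(7, original)) == original
```

With roughly 92-byte bodies per chunk in this sketch, a user action of around 800 bytes fans out into the 8-9 satellite messages mentioned above.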

Here's what it looks like in real life – a shot from our labs.

What about electrical outages?

We anticipated that the nodes would need to be robust, so ensuring they have enough on-board power was crucial. This compensates for electrical outages, since delivery trucks can only supply 12V power while the ignition is on.

Fortunately, our hardware engineer friends at Weaver Labs built a solid solution to this issue into these boxes. They equipped the nodes with integrated backup power and implemented a battery management system and software that help the nodes recover from power outages, and even from situations where the on-board battery is fully depleted. This ensures that the system remains operational in challenging conditions.

The result is a combination of hardware and software, enabled by mesh network technology, 4G, and satellite communication, that stands up to difficult conditions and keeps working. In short, the system is designed to provide reliable tracking and communication in even the most challenging environments.

In our next post, I'll address how the system handles security, covering the authentication and blockchain details baked into Track & Trust.

<<Previous Post

Next Post>>

The post A Satellite-Enabled Mesh Network appeared first on DATARELLA.


Metadium

2024 Q3 Activity report


Dear Community,

In the third quarter of 2024, Metadium continued to make significant progress, a journey we couldn’t have embarked on without your unwavering support. We want to thank everyone who has been with us throughout 2024, and we are pleased to report and summarize Metadium’s key achievements and developments.

Summary

The third quarter of 2024 saw a total of 1,794,677 transactions and 33,051 DID wallets created.
The Explorer website was updated for more efficient data usage.
POSTECH (Pohang University of Science and Technology) has adopted a blockchain-based smart student ID powered by Metadium's mainnet.
CertiK Skynet security audit and KYC certification were successfully completed.

Technology

Q3 Monthly Transactions

During the third quarter of 2024, there were a total of 1,794,677 transactions, and 33,051 DID wallets were created (as of September 30).

Metadium Explorer Upgrade

The Metadium Explorer website has been updated to reduce server load and display data more efficiently.

For more details, check here.

POSTECH Adopts Blockchain-Based Smart Student ID

POSTECH has adopted a blockchain-based smart student ID powered by Metadium’s mainnet. This is a significant milestone demonstrating the excellence and stability of Metadium’s technology.

Read more

CertiK Skynet Security Audit and KYC Certification Completed

Metadium has successfully completed a security audit and KYC certification through CertiK Skynet. This is part of our continued commitment to enhancing security and transparency, which we consider the top priorities for the project’s ongoing development.

The key results of this audit and certification include:

An increase of 5.88 points in the CertiK Security Score
A rise of 513 places in the Security Score Ranking
Obtaining the KYC certification badge

For more details, check here.

We are committed to consistently providing transparent updates and the latest information while striving to build an innovative and sustainable blockchain ecosystem. Moving forward, we will continue to strengthen research, development, and collaboration to realize the true value of decentralization together with all of you. We sincerely thank everyone who is part of the Metadium community and will continue to work hard to earn your trust and support.

Metadium Team: Website | https://metadium.com Discord | https://discord.gg/ZnaCfYbXw2 Telegram(EN) | http://t.me/metadiumofficial Twitter | https://twitter.com/MetadiumK Medium | https://medium.com/metadium

2024 Q3 Activity report was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Mastering NIS2 and DORA Compliance


by Mike Small

Organizations and society have become dependent upon digital services, which has increased the impact of cyber threats on businesses and the general public. Recent incidents demonstrate how ransomware attacks, and even simple mistakes, can disrupt public services including healthcare. You need to prepare for and implement the NIS2 and DORA regulations effectively, not only to meet your obligations but also to protect your organization.

NIS2 and DORA scope and regulatory approach

The EU NIS2 (Network and Information Security Directive 2) and the EU DORA (Digital Operational Resilience Act) are two regulatory frameworks designed to enhance the cybersecurity and operational resilience of organizations within the European Union. Although they have overlapping goals in promoting robust cybersecurity practices, they differ in their scope, objectives, and target sectors.

NIS2 has a broader scope and applies to multiple sectors beyond finance, including healthcare, transport, energy, digital infrastructure, and public administration. It aims to enhance cybersecurity across a diverse range of critical sectors. DORA, on the other hand, focuses exclusively on the financial industry. It addresses how financial entities and their Information and Communication Technology (ICT) providers should handle operational resilience and cybersecurity risks.

NIS2 takes a principles-based approach, offering guidelines that can be adapted by different sectors. It establishes minimum requirements and allows individual member states to implement additional rules if needed.  DORA, however, takes a more prescriptive approach, with detailed rules and requirements for financial entities. It defines clear guidelines for risk management, incident reporting, testing, and monitoring of third-party ICT providers.

Third-Party Risk Management

Both emphasize the importance of organizations managing risks in their cyber supply chains. DORA places a strong emphasis on third-party risk management within the financial sector, introducing stricter oversight of ICT service providers. It requires financial institutions to monitor and manage risks arising from third-party ICT dependencies, with critical ICT providers being subject to direct regulatory oversight. On the other hand, NIS2 encourages managing supply chain risks but does not impose as strict controls over third-party service providers as DORA.

NIS2 and the British Library Hack

In October 2023, the British Library was the victim of a cyber-attack which copied and exfiltrated some 600GB of files, including personal data of library users and staff. As well as the exfiltration of data for ransom, the attackers’ methods included the encryption of data and systems, and the destruction of some servers to inhibit system recovery and to cover their tracks.

It is interesting to consider how this incident could have been prevented or its impact significantly reduced by compliance with the NIS2 Directive.  Here are some suggestions based on comparing the reported results of the investigation into the incident with the requirements of NIS2.

Implementation of Multi-Factor Authentication (MFA). The lack of MFA on certain systems allowed unauthorized access, which was a significant factor in the attack's success. NIS2 emphasizes the implementation of appropriate technical and organizational measures, including MFA, to protect network and information systems. Had MFA been in place across all remote and privileged access points, it would have added a critical layer of security and could have prevented the attackers from gaining access as easily.

Improved Network Segmentation and Monitoring. The library's legacy network infrastructure allowed attackers broader access once they breached the system. NIS2 mandates that organizations employ risk management practices, including network segmentation and continuous monitoring, to limit the spread of potential breaches. Implementing a modern, segmented network design as required by NIS2 could have restricted the attackers’ movement, minimizing the impact.

Regular Security Assessments and Testing.  Although the Library had conducted some security assessments, the NIS2 Directive encourages regular and more comprehensive security testing, such as penetration testing and vulnerability assessments. These assessments could have highlighted weaknesses, particularly in legacy systems, and prompted remedial actions that might have prevented the attack.

Your Weakness is their Opportunity

The UK telecoms and services provider BT logs 2,000 signals of potential cyber-attacks every second, which works out to roughly 173 million per day (2,000 × 86,400 seconds). According to IBM, the average cost of a data breach in 2024 was $4.88 million. You need to act now to protect your organization's cyber infrastructure against cyber-attacks and to prepare to respond when an incident occurs. Your weakness is the threat actors' opportunity.

The NIS2 directive and the DORA regulation set out the basic rules you need to follow. Act now to build a clear view of the risks your organization faces, identify the gaps in your controls, and implement best practices for cyber hygiene. Check your cyber maturity. To get more details on how to navigate the final stretch, attend cyberevolution 2024.

cyberevolution 2024

We are excited to invite you to our cyberevolution event in Frankfurt on December 3-5, 2024. We will be exploring a wide range of cybersecurity topics, with plenty of chances to chat with industry experts. Cyber resilience will be one of the big topics on the agenda. In a combined session, Mike Small will discuss “Why you need data backup and how AI can help” and Joshua Hunter will provide insights into “Focus on Cyber Resilience - Prepare, Respond, Resume”. We look forward to seeing you there and to some great discussions.

Sunday, 06. October 2024

KuppingerCole

Building a Stronger Cyber Community: Inside KuppingerCole Membership


Matthias discusses the new KuppingerCole Membership program with Vanessa Schweihofer and Alexei Balaganski. They explore the various benefits of the Membership, including access to research, networking opportunities, and personalized insights through inquiry calls and workshops.

The conversation highlights the technological advancements being integrated into the Membership, such as AI capabilities and a passwordless registration process. The importance of community building and continuous improvement in cybersecurity and identity management is emphasized, along with the advantages of Corporate Membership for teams.



Friday, 04. October 2024

auth0

Why Developers Should Attend Oktane Online

Build the future of identity: join developers around the world at Oktane Online 2024

Datarella

Mesh Network Technology Demystified


This is the first in a series of technical posts about how Track & Trust works at a component level. To start, we’ll outline how our mesh network technology works in this post. Additionally, I’ll provide quick navigation links to the follow-up articles at the bottom of each article once the series is complete. For now, let’s jump in. 

In the photo above, we're showcasing part of our fleet of mesh nodes. You may have seen them before in our recent post announcing that we passed our site acceptance tests with the European Space Agency. These boxes pack a lot inside, so what kind of communications superpowers do they possess?

What's mesh network technology?

Mesh network technology can be a bit confusing, so let's demystify the jargon. These black boxes are “mesh nodes,” which in our case means they can communicate with one another using wifi-direct. We also use this protocol to send data from Android phones directly to the nodes, without the need for any additional gateways or internet connectivity.

Why wifi-direct?

We chose wifi-direct because it's really fast. Nodes can detect one another and negotiate a wireless data connection even at highway speeds, and they can do so with only a brief moment of contact.

We serve our application directly from the mesh node to a logistics employee's phone using wifi-direct. This is useful because we don't need an internet connection. After all, we designed the system to cope with the worst conditions imaginable, and lack of network connectivity is where that starts.

Layer 1: Cellmesh Layer

Our partners at Weaver Labs contributed their cellmesh software. Cellmesh controls the automatic detection, negotiation, and handling of communication between nodes. It also continually searches for communication resources like 4G or satcom and routes data to our servers, ensuring continuity of operations in adverse conditions.

Layer 2: Mesh Node Layer

Datarella built a software layer on top of the lower-level networking technology from Weaver Labs. Consequently, we call this layer the Mesh Node layer. It has several big jobs:

Manage data piped into cellmesh from user interactions
Manage incoming data from cellmesh originating from other nodes
Maintain efficient data replication between nodes
Manage deletion of data already posted to the backend

In addition, the Mesh Node layer prevents our data pipelines from growing too large with redundant data. Together, the cellmesh and Mesh Node layers enable individual mesh nodes to connect with one another seamlessly. Nodes automatically authenticate their identities cryptographically and freely pass authenticated data back and forth. They can also post this data directly to our backend servers.

In the image, the blue boxes represent nodes. The boxes provide service continuity for users in the field who want to post information like deliveries, pickups, damage, delays, and more.  Sometimes they’re connected to one another but not always.  Every box has two 4G cellular antennas and continuously searches for an exit to our Track & Trust cloud.

Patient and Resilient Mesh Network Technology

Our boxes are patient. If they’re offline, they wait until they’re online to post data. Alternatively, if they can’t do that, they wait for another node to come along. When connected, they play a game of telephone. For instance, if one node receives a message from another, it queues that message to pass along when it meets another box.

As we add more nodes to the system, it becomes more resilient. Meanwhile, our cellmesh and mesh node services make rollout zero-configuration: we simply plug the nodes into the trucks, and they start communicating with one another and with the internet. They serve up mobile interfaces for drivers and warehouse workers, providing the most up-to-date information about what's happening in the field.

Whenever a node connects with the cloud, it shares everything it knows. It also tells its “colleagues” which messages it successfully passed on, allowing them to forget information that they know a “colleague” node has already posted. This is technically known as a “gossip protocol,” and it's at the heart of how our Mesh Network Technology manages the information lifecycle.
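As a rough illustration of this lifecycle, here is a toy Python model of the store-and-forward and forget-on-acknowledgement behaviour described above; the class and method names are invented for the sketch and are not the actual cellmesh implementation.

```python
class GossipNode:
    """Toy model of the store-and-forward gossip behaviour (names invented for illustration)."""

    def __init__(self, name: str):
        self.name = name
        self.pending: dict[str, bytes] = {}  # message id -> payload awaiting upload
        self.acked: set[str] = set()         # ids known to have reached the backend

    def record_event(self, msg_id: str, payload: bytes) -> None:
        """A driver or warehouse worker posts a delivery, pickup, damage report, etc."""
        if msg_id not in self.acked:
            self.pending[msg_id] = payload

    def sync_with(self, peer: "GossipNode") -> None:
        """Two nodes meet: swap acknowledgements first, then replicate what's left."""
        merged_acks = self.acked | peer.acked
        self.acked, peer.acked = set(merged_acks), set(merged_acks)
        for node in (self, peer):  # forget anything a "colleague" already posted
            for msg_id in list(node.pending):
                if msg_id in merged_acks:
                    del node.pending[msg_id]
        # play telephone: replicate the remaining messages both ways
        merged_pending = {**self.pending, **peer.pending}
        self.pending, peer.pending = dict(merged_pending), dict(merged_pending)

    def flush_to_cloud(self) -> None:
        """Called whenever an exit (4G or satellite) to the cloud becomes available."""
        self.acked |= set(self.pending)
        self.pending.clear()
```

In this toy model, two nodes that meet first merge their sets of acknowledged message IDs, prune anything already posted, and only then replicate what remains: the "game of telephone" plus the "forgetting" described above.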

In the next post, we’ll explore what happens if 4G isn’t working for some reason. There, we’ll discuss how satellite communications come into play as well as the critical role that GNSS plays.

Next Post >>

The post Mesh Network Technology Demystified appeared first on DATARELLA.


IDnow

Banks urged to improve anti-fraud controls as APP scam compensation becomes compulsory.

Will the new rules be enough to tackle APP fraud, or will it simply cause more problems? Now, more than ever, banks need to innovate and educate to keep one step ahead of the fraudsters.

In a monumental move that has been applauded by consumer protection groups, the Payment Services Regulator (PSR) has mandated that from October 7, victims of Authorized Push Payment (APP) scams in the UK must be refunded (up to £85,000) by the account holder’s bank within a maximum of five days. 

This amount of £85,000 is significantly lower than the originally proposed £415,000, which was met with reservations by the UK Payments Association on the grounds that it would “threaten the viability of smaller payment companies”. Other concerns were expressed by the trade association UK Finance, which feared the move would encourage more ‘complicit fraud’ and incentivize fraudsters to claim compensation money.

However, for the customer, especially those who have fallen victim to APP scams, the move could not have happened sooner. 

This is positive news for UK victims of APP scams, who for too long have had to deal with the additional stress of attempting to have their money refunded.

Grigory Yusupov, Regional Director of UK at IDnow.

“This landmark decision should act as a reminder of the responsibility that banks have to their customers and a wake-up call to ensure their fraud-prevention and identity verification tool stack is fit for purpose, especially for social engineering-type attacks.”

It is hoped that the move will go some way to reducing the amount lost to APP scams, which in 2023 was estimated at a whopping £459.7 million.

A positive step forward – for the customer at least.

Before the recent ruling, there was no guarantee that victims of APP scams would be refunded, or who they should even turn to for help. For the unlucky few whose bank refused to refund, their last call was often to the Financial Ombudsman Service (FOS).  

From April to July 2024 there were 8,734 complaints about fraud and scams, half of which concerned APP scams. This was a significant increase on the same period of 2023 (6,094).

The year-on-year rise in APP scam complaints can be attributed to:

An increase in ‘multi-stage fraud,’ which sees funds pass through numerous banks, resulting in consumers submitting multiple claims.
A growth in people inadvertently using their credit or debit cards to pay fraudsters, which are not covered by the Contingent Reimbursement Model code or the new PSR rules.
More online fraud cases submitted by professional representatives, including claims management companies.

What is an APP scam?

While there are many different forms of APP scams, the end goal is always the same: to deceive individuals or businesses into sending money by fraudulent means. One of the most common forms of APP fraud is a romance scam. Read the story about how one British woman made it her mission to expose the romance scammers and even wrote a book about how she did it.

Other forms of APP fraud include: 

Purchase scams, where victims use a fake website or link in an attempt to purchase goods or services. 

Impersonation scams, where criminals pose as a well-known company or brand, such as a delivery firm, retailer or even HM Revenue & Customs and claim they have a parcel or bill that needs to be settled, for example. 

Investment scams occur when victims are duped into sending funds to a fraudster posing as someone with a ‘too good to be true investment.’ 

Read the story about how one man was contacted on LinkedIn with ‘the offer of a lifetime’ and proceeded to lose almost a million dollars: ‘The rise of social media fraud: How one man almost lost it all.’

There are many others, including loan fee scams and lost pet scams. In fact, there are new APP scams created regularly, which is one of the reasons why APP scams are so dangerous and why the FOS is kept so busy. 

“Fraudsters’ methods are always evolving, and we continue to see that reflected in the complaints brought to our service,” said Pat Hurley, Ombudsman Director for Banking. 

“We are currently receiving – and resolving – around 500 fraud and scam complaints a week. In all the cases we receive, we’ll look at the individual circumstances and investigate whether a business did everything it was required to do. When we do uphold complaints, we expect firms to learn from our findings and apply them to any future interactions with their customers.”

What are banks currently doing to tackle APP fraud?

For banks, preventing APP fraud can often feel like a frustrating game of ‘Whac-A-Mole’: after they finally address one form of APP fraud attack and educate their customers about it, a new variant springs up. This is why many banks employ AI to monitor transaction behaviors and cross-reference historical spending patterns to flag potentially fraudulent activity. However, relying solely on AI can sometimes result in false positives or missed cases of social engineering.

In general, banks tend to rely on specific risk signals, which are used to ascertain if a) the transaction is valid and authorized by a trusted account holder or b) the flagged activity is a genuine risk. However, as APP scam payments are authorized by the account holder, a regular bank’s defense systems do not necessarily flag such transactions as anomalies.  

From October 7, banks will be able to pause transactions for up to 72 hours when there are “reasonable grounds to suspect a payment is fraudulent.” Previously, banks had to either process or refuse a payment by the close of the next business day.

Can customers do anything to protect themselves against APP scams?

Of course, new regulations like October 7’s ruling may go some way to lessening the impact of APP fraud, by forcing banks to be held responsible, but as APP fraud appeals to human naivety, it can be particularly difficult to protect against. However, there are steps customers can take to protect themselves:

Too good to be true? If the offer or investment sounds too good to be true or is too cheap to be true, then it probably is.
Is that really you? Is the bank, delivery company or even Keanu Reeves really contacting you? Fraudsters will impersonate anyone. If you're unsure whether it's really the company contacting you, get in touch via official channels to check!
I beg your pardon? Fraudsters know that it’s often only a matter of time before people cotton on to the deception, which is why they tend to pressurize people to authorize payments as soon as possible.
How do you want to pay? If a company or person you have dealt with before is asking you to transact via a different payment method, then this is a red flag. 
Why do they want that? Do not give out passwords, addresses or any other Personally Identifiable Information over the web.

Why are APP scams so common?

The main reason APP scams have become so commonplace and pose such a threat to the public isn't that they are particularly sophisticated – after all, APP scams are essentially just social engineering. According to Lovro Persen, Director of Document and Fraud at IDnow, it is the sheer scale of attacks that people are subjected to that makes APP scams so dangerous.

It’s a numbers game. We know that if fraudsters send 10 APP scam messages, they are unlikely to catch anyone, but if they send 10,000 messages, statistically there will be people who bite.

Lovro Persen, Director of Document and Fraud at IDnow

“More must be done to raise awareness of the dangers of APP scams, especially with the more vulnerable groups like the elderly or the desperate. On paper, the decision to make compensation compulsory is positive; however, I do worry that banks may need to raise interest rates on loans and other services to make up for the loss of profit. In this regard, the financial compensation will affect the bank's bottom line even more than it currently does.”

So, will compulsory compensation stop APP fraud?

Clearly, the fight against APP fraud will not simply end on October 7. In fact, according to UK Finance, the new rules may even make things worse:

“We continue to express the opinion that the PSR’s approach may encourage more complicit fraud and exacerbate the APP risk as fraudsters capitalize on a reimbursement model which requires minimal consumer evidence, nor demonstration of consumer caution and a limited opportunity for payment service providers to investigate and challenge the consumer claim. This will inevitably increase the attractiveness of the UK to criminal entities.” 

The battle against APP fraud will not be won overnight. It will require regular regulatory updates to protect the industry. It will need a commitment from banks to educate their customers on the dangers of different forms of APP fraud. It will also require banks to innovate with new methods of fraud detection to not only protect their business bottom line and their customers but also ensure the customer journey is not too dramatically impacted.

How video identity verification can add an additional layer of protection against APP fraud.

While there will always be fraudsters on the lookout for new and inventive ways to deceive unsuspecting members of the public, banks have a responsibility to make it as difficult as possible for them. 

As APP fraud relies so heavily on real-time social engineering, a hybrid approach that combines automated detection with real-time human verification can provide an extra layer of defense and help to spot potential coercion. For example, when certain high-risk triggers are detected, such as unusually frequent payments to a new payee or a large, unexpected transaction, a live video verification session can be initiated to verify the transaction before it proceeds. 

This allows the agent to ask specific questions to verify the legitimacy of the transaction and identify potential fraud tactics, such as: 

“Can you explain the purpose of this payment?” 

“Have you been in contact with this payee before?”

“Is there any urgency pushing you to authorise this payment?”
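To make the trigger logic concrete, here is a minimal sketch of how such high-risk rules might be encoded before handing off to a live agent; the thresholds, field names, and function are illustrative assumptions, not IDnow's actual risk-scoring engine.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount_gbp: float
    payee_known: bool           # has the account holder paid this payee before?
    payments_to_payee_24h: int  # frequency signal for the same payee

# Illustrative thresholds only; a real engine would score many more signals
LARGE_AMOUNT_GBP = 5_000.0
RAPID_REPEAT = 3

def requires_live_verification(tx: Transaction) -> bool:
    """Flag the high-risk patterns described above for a live video session."""
    large_to_new_payee = (not tx.payee_known) and tx.amount_gbp >= LARGE_AMOUNT_GBP
    unusually_frequent = (not tx.payee_known) and tx.payments_to_payee_24h >= RAPID_REPEAT
    return large_to_new_payee or unusually_frequent

if __name__ == "__main__":
    tx = Transaction(amount_gbp=8_200.0, payee_known=False, payments_to_payee_24h=1)
    if requires_live_verification(tx):
        print("Hold the payment and start a live video verification session")
```

A rule pass like this would sit in front of the human step: only transactions that trip a trigger are paused and routed to an agent, keeping the customer journey untouched for everything else.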

VideoIdent Flex is designed to detect fraud at critical touchpoints, offering seamless real-time video verification for high-risk transactions. This solution is not only effective for onboarding and authentication but also plays a vital role in preventing APP fraud at the point of transaction. By combining AI risk scoring and live agent verification, banks can ensure that suspicious transactions are flagged and reviewed before being authorized.

Interested in more information about VideoIdent Flex? Check out our recent blog, ‘How video identity verification can help British businesses finally face up to fraud.’

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


Veridium

Join Veridium at it-sa 2024


Join Veridium at it-sa 2024 for Live Demos and the Latest in Identity Assurance!

Date: October 22-24, 2024
Location: Nürnberg, Germany
Booth: Hall 9 Booth 177

Are you planning to attend this year’s it-sa Expo&Congress in Nürnberg from October 22nd to 24th? We’re excited to announce that Veridium will once again be participating, this time in partnership with IGEL Technologies at Stand 9-117 in Hall 9.

Presenting the Next Evolution in Identity Security

This year, we’re thrilled to introduce the latest version of the Veridium Identity Assurance Platform. This upgrade brings unparalleled advancements in analytics-focused identity security, delivering an elevated level of intelligence and protection to safeguard today’s rapidly evolving digital landscapes. Our team will demonstrate how our platform enables quick, seamless authentication across various work environments, including robust integrations with stationary IGEL OS systems.

A Unified Approach to Thin and Zero Client Authentication

Our collaboration with IGEL Technologies underscores our shared commitment to simplifying security for thin and zero clients. With Veridium’s comprehensive authenticator options, users can experience the widest range of authentication methods on IGEL OS, from biometrics to secure tokens, all while benefiting from the fastest, simplest login experience available.

Schedule a Meeting with Us!

We’d be delighted to connect with you during the expo. If you’d like to arrange a meeting, feel free to reach out directly to Rainer Witzgall on LinkedIn or respond to this announcement. Join us to see firsthand how Veridium and IGEL are setting new standards in effortless, powerful identity security!


Don't have a ticket yet?

No worries! Book your ticket now using the registration code 545285itsa24 and receive a free daily pass.

About Veridium: Veridium is revolutionising user identity security. By reliably verifying user identities and devices, our full-spectrum Authentication Platform—equipped with AI-driven identity threat protection and continuous authentication capabilities—resolves a foundational security challenge that is critical to the effectiveness of nearly all other security controls related to user identity and access management: ensuring accurate user authentication, from start to finish. Our Authentication Platform seamlessly integrates with existing Identity/SSO providers to enhance authentication strength, and with ZTNA, MDM, and EDR solutions. It boasts the widest range of authenticators on the market, including passwordless, phishing-resistant options, FIDO tokens, and patent-protected biometrics (contactless fingerprints, face, and behavioural), and can enhance the security of traditional MFA solutions to improve the security posture of organisations regardless of where they sit along their IAM maturity journey.

Thursday, 03. October 2024

auth0

Continuous Session Protection Now Available for Enterprise Customers

Continuous Session Protection empowers customers to meet a wide range of session and refresh token security needs, including custom lifetimes and revocation capabilities. Let’s dive deeper into how these features can enhance your security strategy.

Northern Block

The Global Acceptance Network (GAN) (with Darrell O’Donnell)


🎥 Watch this Episode on YouTube 🎥
🎧   Listen to this Episode On Spotify   🎧
🎧   Listen to this Episode On Apple Podcasts   🎧

About Podcast Episode

What if there was a way to establish a new trust layer for the internet, enabling secure digital interactions and unlocking valuable transactions that are currently impossible?

In this episode of The SSI Orbit Podcast, host Mathieu Glaude sits down with Darrell O’Donnell, Executive Director of the Global Acceptance Network (GAN), to explore this ambitious vision. The GAN aims to create a neutral, non-profit organization that governs digital public infrastructure for exchanging trusted information.

Drawing inspiration from payment networks like Visa, GAN seeks to standardize conversations around digital credentials and identity across different ecosystems and industries globally.

Some of the valuable topics discussed include:

The problems GAN is trying to solve around digital trust How GAN compares to and complements existing digital identity efforts GAN’s governance structure and business model The role of trust registries in enabling new types of digital interactions

Tune in to learn how GAN could transform digital trust and enable new forms of secure, valuable transactions in our increasingly digital world.

Key Insights:

GAN (Global Acceptance Network) aims to establish a new trust layer for the internet, ensuring secure and seamless digital credential exchanges.
The internet was not built with trust in mind; GAN seeks to address this by building a neutral governance system.
GAN is working with ecosystems across different industries to standardize how digital credentials are issued and verified.
Unlike Visa, GAN won't be providing the “rails” for transactions but will facilitate standardized conversations around digital trust.
Trust registries are foundational to GAN's strategy, enabling ecosystems to verify credentials consistently.
The importance of building business, governance, and technology together for sustainable trust networks.

Strategies:

Focusing on business, governance, and technology, in that order, to unlock commercial value and ensure long-term sustainability.
Operationalizing the Trust Over IP framework by applying it in real-world ecosystems.
Building trust registries to answer simple yet critical questions like “Is entity X authorized to do Y?”

Chapters:

00:00 – What is the Global Acceptance Network (GAN)
08:35 – What value does GAN bring to whom?
12:05 – Comparing GAN to VISA and SWIFT
17:39 – Does GAN need to become a consumer brand?
22:49 – Who are GAN Members? What type of relationship will they have with GAN?
27:07 – How does GAN differ from and complement National Digital ID programs?
36:16 – How is GAN managing its own internal governance?
40:19 – What is GAN's business model?
46:27 – How do Trust Registries fit into GAN?
53:29 – What is next for the GAN?

Additional resources:

Episode Transcript
The Global Acceptance Network
Bhutan Joins GAN
One from Many: VISA and the Rise of Chaordic Organization
Global DPI Summit
Trust Over IP Foundation
SWIFT
OpenWallet Foundation
Decentralized Identity Foundation
W3C

About Guests

Darrell O’Donnell is the Executive Director of the Global Acceptance Network (GAN), a pioneering initiative focused on establishing a trust layer for the internet. Darrell is a technology company founder, executive, investor, and advisor who specializes in helping organizations—both large and small—operationally deploy emerging technologies. With a focus on solving the challenges of life-critical and mission-critical systems, he excels in environments where interoperability and collaboration between multiple players, often with no clear central authority, are essential.

As a leader in the digital trust ecosystem, Darrell chairs multiple standards and interoperability efforts with organizations such as the Trust Over IP Foundation, Sovrin Foundation, Decentralized Identity Foundation, and W3C. He also advises numerous startups, large corporations, senior government leaders, and investors, leveraging his expertise to guide the development and deployment of digital trust solutions across industries.  LinkedIn

The post The Global Acceptance Network (GAN) (with Darrell O’Donnell) appeared first on Northern Block | Self Sovereign Identity Solution Provider.


Extrimian

Extrimian Challenge at the DIF Hackathon 2024


We’d like to invite you to participate in the Extrimian Challenge at the DIF Hackathon, where you’ll have the opportunity to develop a secure hotel check-in system using Verifiable Credentials (VC).

The challenge is to design a system where hotels can verify the digital passport of guests, issued by their country’s government, ensuring a seamless, secure, and privacy-focused check-in process.

Challenge details: https://identity.foundation/hackathon-2024/docs/sponsors/extrimian/
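As a starting point for the challenge, here is a minimal sketch of the core checks a hotel verifier might run on a presented digital-passport credential; the field names loosely follow the W3C Verifiable Credentials data model, the trusted-issuer DID is a made-up placeholder, and a real verifier must also check the cryptographic proof, which is elided here.

```python
from datetime import datetime, timezone

# Made-up placeholder: government issuer DIDs this hotel trusts
TRUSTED_PASSPORT_ISSUERS = {"did:example:gov-passport-authority"}

def check_in_allowed(credential: dict) -> bool:
    """Core structural checks on a presented digital-passport VC.
    Cryptographic proof verification is deliberately elided."""
    if credential.get("issuer") not in TRUSTED_PASSPORT_ISSUERS:
        return False  # not issued by a government this hotel trusts
    if "DigitalPassportCredential" not in credential.get("type", []):
        return False  # wrong credential type for check-in
    expiry = credential.get("expirationDate")
    if expiry and datetime.fromisoformat(expiry) < datetime.now(timezone.utc):
        return False  # passport credential has expired
    return True

guest_vc = {
    "type": ["VerifiableCredential", "DigitalPassportCredential"],
    "issuer": "did:example:gov-passport-authority",
    "expirationDate": "2030-01-01T00:00:00+00:00",
    "credentialSubject": {"name": "A. Traveller"},
}
print(check_in_allowed(guest_vc))  # True
```

A submission would wrap checks like these behind a presentation-exchange flow, so the guest only discloses the attributes the hotel actually needs.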

Join this transformative event in digital identity management

The Decentralized Identity Foundation (DIF) and Extrimian are thrilled to invite developers and tech enthusiasts to the Extrimian Challenge during the DIF Hackathon this October. This event is dedicated to revolutionizing how the travel and hospitality industry manages secure, seamless guest check-ins using the power of Self-Sovereign Identity (SSI) technology.

Hackathon DIF | Event Highlights:

Secure Hotel Check-In System Design: Utilize the potential of Verifiable Credentials (VCs) to enhance privacy and security in hotel guest management.
Interactive Workshop: Join us on October 10th for an in-depth session on designing key components for identity and information verification. Reserve your seat now!
Valuable Prizes: Compete for $1,000 USD and up to $1,800 in credits on the Extrimian Platform.

Join Our Community: Engage with other participants and experts in our Discord channel for insights and support throughout the hackathon.

Source: https://www.eventbrite.com/e/designing-components-for-secure-identity-and-information-verification-tickets-1031701084717

Practical Experience and Prizes for the Extrimian Challenge:

1st Place: $1,000 USD and $1,800 in Extrimian Platform credits.
2nd and 3rd Places: Significant credits on our platform, empowering you to develop more innovative solutions.

Get Involved: Discover our collaboration with DIF and how we’re driving the adoption of decentralized identity technologies to create a safer digital world. Learn more about our partnership with DIF here.

We eagerly await your innovative solutions that will push the boundaries of digital identity verification. Let’s build the future of secure travel together!

Other Resources and Links:

Extrimian & DIF: https://extrimian.io/wikis/decentralized-identity-foundation-dif/
Challenge: https://identity.foundation/hackathon-2024/docs/sponsors/extrimian/
Workshop (Thursday, October 10; 1 – 2pm GMT-3): https://www.eventbrite.com/e/designing-components-for-secure-identity-and-information-verification-tickets-1031701084717?aff=oddtdtcreator
Extrimian Platform – IDConnect: https://idconnect.extrimian.com/
Extrimian Academy: https://extrimian.io/es/academy/
Previous collaboration between Extrimian & DIF Hack-Along: https://extrimian.io/courses/extrimian-and-dif/
Documentation: https://docs.extrimian.com/en/
GitHub: https://github.com/extrimian
Practical implementation: https://youtube.com/playlist?list=PL8tv4_kgykip5IbaHoEQrHdSnzGfr-mfa&si=VYcL-r1Yjm7LUtnZ
SSI Virtual Assistant: https://trusty.ai.copilot.live/
QuarkID wallet (Android): https://play.google.com/store/apps/details?id=com.quarkid
QuarkID wallet (iOS): https://apps.apple.com/ar/app/quarkid/id6450680088

The post Extrimian Challenge at the DIF Hackathon 2024 first appeared on Extrimian.


KuppingerCole

Security Orchestration, Automation and Response (SOAR)


by Alejandro Leal

This report provides an overview of the Security Orchestration Automation and Response (SOAR) market and a compass to help you find a solution that best meets your needs. By examining market trends, product functionalities, relative market share, and innovative approaches, this document serves as a definitive guide to understanding the current landscape and selecting the SOAR solution that most effectively addresses your organization's specific challenges.

Privileged Access Management


by Paul Fisher

This KuppingerCole Leadership Compass provides an overview of insights on the leaders in innovation, product features, and market reach for Privileged Access Management (PAM). These vendors use a variety of software tools to enable organizations to control and monitor privileged access to endpoints, servers, applications, and cloud resources. Products range from those that offer basic PAM capabilities, such as password vaulting and management, up to platforms that offer most capabilities, including some Cloud Infrastructure Entitlement Management (CIEM) capabilities.

Ocean Protocol

DF109 Completes and DF110 Launches

Predictoor DF109 rewards available. DF110 runs Oct 3 — Oct 10, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 109 (DF109) has completed.

DF110 is live today, Oct 3. It concludes on October 10. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF110 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:

To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
To claim ROSE rewards: see the instructions in the Predictoor DF user guide in the Ocean docs.
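For intuition only, here is a toy sketch of the predict-and-stake loop a Predictoor bot performs each epoch; every function below is a hypothetical stand-in, since the real flow uses Ocean's Predictoor bot templates and the payout scripts linked from the Ocean docs.

```python
import random

def predict_direction(pair: str, horizon_s: int) -> bool:
    """Hypothetical model stub: True means the price is expected to rise."""
    return random.random() > 0.5

def run_epoch(pair: str = "BTC/USDT", horizon_s: int = 300, stake_ocean: float = 10.0) -> None:
    """One epoch of the predict-and-stake loop, purely illustrative.
    A real bot submits the prediction on Oasis Sapphire; accurate predictions
    earn a share of rewards, while stake behind wrong predictions is slashed."""
    direction = "UP" if predict_direction(pair, horizon_s) else "DOWN"
    print(f"Submitting {direction} for {pair} over {horizon_s}s, staking {stake_ocean} OCEAN")

run_epoch()
```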

4. Specific Parameters for DF110

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF109 Completes and DF110 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

A Guide to PingIDM

Effective lifecycle and relationship management with PingIDM personalizes user experiences, enhances security, and reduces administrative burdens.

Greetings valued Ping customer! Managing vast arrays of digital identities can sometimes feel like navigating a maze. But don't worry—PingIDM is here to help you bring order to that chaos once and for all. As your guide through the digital labyrinth of scattered identity data, we’re here to show you how you can simplify identity management while enhancing your business outcomes. And I promise—no boring tech jargon (okay, maybe just a little, but I'll make it fun!).


DHIWay

Dhiway: A Journey Rooted in Knowledge and Trust


When we set out to create Dhiway, we sought a name that would embody the depth of our vision and the clarity of our mission. We wanted a name that would represent not just a combination of letters but a philosophy, a journey, and a commitment to building trust in an increasingly digital world. We aimed to create something that resonated deeply with the essence of knowledge, trust, and purpose. That's when we turned to one of the most profound spiritual sources: the Gayatri Mantra.

The word “Dhi” (धी) in Sanskrit means knowledge, wisdom, and understanding. It is a central concept in the Gayatri Mantra:

“Om Bhur Bhuvaḥ Svaḥ 
Tat Savitur Vareṇyaṃ   
Bhargo Devasya Dhīmahi  
Dhiyo Yo Naḥ Prachodayāt.”

The mantra is a prayer to the divine light of knowledge and wisdom, asking for the illumination of our minds. It is this “Dhi” that represents the quest for higher knowledge, intellectual clarity, and the ability to discern the right path. In the context of our company, “Dhi” reflects our focus on building systems based on truth, transparency, and integrity—values that guide us in creating technology that can be trusted.

The second part of our name, “Way,” serves a dual purpose. On the one hand, it represents the path: the journey of discovery, learning, and collaboration that every organization and individual embarks on. On the other, it resonates with a deeper question: “Who are you?” It's a reminder to stay grounded in self-awareness, to constantly ask ourselves whether we are on the right path, and to align our actions with our purpose.

Thus, Dhiway was born, a fusion of the ancient and the modern. While “Dhi” draws from the wisdom of the ages, “Way” looks forward, symbolizing progress, purpose, and the drive to make a meaningful impact.

Leading the Future of Trust Infrastructure

Dhiway, as a name, is not just symbolic—it’s a reflection of the leadership role we have undertaken in open trust infrastructure technology. In a world where digital interactions are growing at an unprecedented pace, trust has become a critical currency. Whether it’s governments, corporations, or individuals, every stakeholder must be sure that the systems they rely on are secure, verifiable, and trustworthy.

At Dhiway, we collaborate with diverse partners to build solutions that deliver absolute certainty in the digital realm. Our work with cutting-edge blockchain technology and open standards ensures that data, identity, and digital transactions are managed with the highest levels of security and assurance. This aligns with our core philosophy: knowledge is not just about information; it’s about trust and integrity.

A Path Forward

As we continue to grow, the name Dhiway is a constant reminder of our chosen path—a path defined by knowledge, wisdom, and trust. In everything we do, from designing digital credentials to enabling secure data exchanges, we strive to stay true to these principles.

Our name is not just a label. It’s a guiding force that pushes us to lead, innovate, and create a world where trust is the foundation of every interaction. Whether it’s creating digital credentials that ensure privacy or working with open standards for data exchanges, Dhiway’s solutions are designed to bring certainty where it’s needed most. At its core, our work reflects the values encapsulated in our name: knowledge, integrity, and a clear path forward.

Our name reflects this journey — one of enlightenment, truth, and unwavering commitment to building a better, more trusted world.

The post Dhiway: A Journey Rooted in Knowledge and Trust appeared first on Dhiway.

Wednesday, 02. October 2024

Ocean Protocol

Season 6 of the Ocean Zealy Community Campaign!

We’re happy to announce Season 6 of the Ocean Zealy Community Campaign, an initiative that has brought together our vibrant community and rewarded the most active and engaged members.

💰 Reward Pool

5,000 $FET tokens will be rewarded to the Top 100 users on our leaderboard 🚀

📜Program Structure

Season 6 of the Ocean Zealy Community Campaign will feature more engaging tasks and activities, providing participants with opportunities to earn points. From onboarding tasks to Twitter engagement and content creation, there’s something for everyone to get involved in and earn points and rewards along the way.

⏰ Campaign Duration: until the 31st of October, 12:00 PM UTC

🤔How Can You Participate?

Follow this link to join and earn:

https://zealy.io/cw/onceaprotocol/questboard

Season 6 of the Ocean Zealy Community Campaign! was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elliptic

Crypto regulatory affairs: US Treasury targets Russia-linked crypto exchanges in cybercrime crackdown

On September 26, the United States Treasury took action to shut two Russia-linked cryptoasset exchanges out of the financial system owing to their role in facilitating money laundering for cybercriminals and fraudsters. 



Ontology

Ready to Hack the Future of Digital Identity? Join Ontology’s Challenge at the DIF Hackathon 2024

If you’re a developer who wants to help shape the future of decentralized identity — and maybe take home part of a $70,000 prize pool — the Decentralized Identity Foundation (DIF) Hackathon 2024 is where you need to be. Running from October 1st to November 4th, this event brings together creators, innovators, and coders from around the world to redefine digital identity through cutting-edge solutions.

Ontology is stepping up with a challenge that puts the spotlight on ONT Login, our decentralized authentication tool. We’re inviting developers to take it for a spin, build something amazing, and demonstrate just how easy it is to integrate decentralized identity into real-world applications. Whether you’re building for Web2 or Web3, ONT Login is here to make decentralized authentication simple and secure.

The Hackathon: A Platform for Innovation

The DIF Hackathon is no ordinary coding event. With tracks covering Education, Reusable Identity, Travel, and Zero Knowledge Proofs (ZKPs), this hackathon offers endless opportunities for both seasoned developers and newcomers to showcase their skills. Plus, with $70,000 up for grabs, this is the perfect chance to innovate, collaborate, and push the boundaries of what’s possible with decentralized identity.

Ontology’s Challenge: Show Us What You Can Build with ONT Login

At the core of Ontology’s hackathon challenge is ONT Login, a decentralized universal authentication solution that empowers developers to integrate secure, privacy-first login functionality into their apps. With ONT Login, users can log in seamlessly without sacrificing control over their data — a crucial step forward in a world where privacy is increasingly under attack.
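
Under the hood, decentralized authentication tools like ONT Login generally boil down to a DID-based challenge-response: the app issues a one-time nonce, the user's wallet signs it with a key from their DID document, and the server verifies the signature. The sketch below is schematic only; the function names are hypothetical and not ONT Login's actual SDK API, so use the resources listed later in this post for the real integration.

```typescript
// Schematic DID challenge-response login; names are hypothetical.
import { randomBytes } from "crypto";

interface SignedChallenge {
  did: string;       // the user's decentralized identifier
  challenge: string; // the nonce the server issued
  signature: string; // signature over the challenge by the DID's key
}

const pending = new Map<string, number>(); // challenge -> expiry timestamp

function requestChallenge(): string {
  const challenge = randomBytes(32).toString("hex");
  pending.set(challenge, Date.now() + 5 * 60_000); // valid for 5 minutes
  return challenge; // shown to the wallet, e.g. via a QR code
}

function verifySignedChallenge(
  response: SignedChallenge,
  verifySig: (did: string, msg: string, sig: string) => boolean
): boolean {
  const expiry = pending.get(response.challenge);
  if (!expiry || Date.now() > expiry) return false; // unknown or expired
  pending.delete(response.challenge);               // one-time use
  // Resolve the DID document and check the signature against its key;
  // a real SDK would supply this verifier.
  return verifySig(response.did, response.challenge, response.signature);
}
```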

We’re challenging you to:

- Create a repository for ONT Login’s technical documentation or SDKs to help make the tool even more accessible to the broader developer community.
- Show us a demo of ONT Login integrated into an existing app, adding it as one of your user login methods. Whether it’s a Web2 or Web3 app, we want to see ONT Login in action, providing a glimpse into a future where decentralized authentication is the norm, not the exception.

Why ONT Login Matters

In the age of constant data breaches, the need for reusable, self-sovereign identity has never been greater. ONT Login is fully open-source and supports multi-SDKs, making it flexible enough to fit into any project while protecting user privacy. It’s not just a technical solution — it’s a statement. A statement that users deserve control over their personal data, and that decentralized identity can deliver on the promise of a more secure digital future.

Need some resources to get started? You’ve got everything you need right here:

- Official Website
- Documentation
- Back-end SDK
- Front-end SDK

Need more help? Join Ontology’s Hackathon Challenge presentation for a detailed walkthrough of ONT Login’s integration:

Join Our Session

For specific support during the hackathon, hop into our dedicated Ontology Support Discord channel:

Join Ontology’s Discord

How to Join the Hackathon

Ready to get involved? Here’s how:

1. Register for the DIF Hackathon on DevPost.
2. Check out the DIF Hackathon details and sign up for educational sessions to help sharpen your skills.
3. Join the DIF Discord community to connect with other developers, share ideas, and get feedback.

This is your chance to collaborate, learn, and make a real impact on the future of digital identity. And if you’re lucky, you might walk away with a piece of the prize pool too.

Why You Should Care

Let’s be real — our digital identities are under constant threat. Every week, there’s another data breach, another scandal involving companies selling our personal information to the highest bidder. ONT Login is Ontology’s answer to that mess. It gives users control over their own data, ensuring privacy without compromising convenience. And now, we’re handing it over to you, the developers, to show the world what decentralized authentication can really do.

This isn’t just about winning a hackathon. It’s about proving that we don’t have to settle for the status quo in digital identity. It’s about showing that privacy and security can coexist with ease of use, and that decentralized identity is the way forward.

So, what are you waiting for? Join the DIF Hackathon, take the ONT Login Challenge, and be part of the next wave of innovation in decentralized identity.

Happy hacking, and may the best builder win.

Ready to Hack the Future of Digital Identity? Join Ontology’s Challenge at the DIF Hackathon 2024 was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


IDnow

5 reasons why digital identities will revolutionize business in 2025 and beyond.

Companies are transforming the way their customers access and reuse their services. Is it time you explored reusable identities? 

Managing customer accounts and customers’ online identities has become a major challenge for businesses and customers alike.  

While there have been significant strides over the last decade regarding remote identity verification, with a variety of automated and expert-led identity verification solutions now available, most options still require repeat verification. For example, each time a customer wishes to open an account or register for a highly regulated service, they need to complete an identification process, which is far from ideal from a business or customer experience standpoint. 

However, with digital identity solutions like IDnow’s YRIS, customers can verify and reuse their identity instantly and effortlessly across a multitude of industries and regulated services. 

Plus, with YRIS’s recent Substantial Level of Assurance (LoA) certification, it is now recognized as providing a level of identity assurance equivalent to face-to-face verification.

It is highly likely that digital identity will rapidly become the norm for accessing a multitude of online services, for citizens and businesses alike.

Marie Duval, Product Manager for YRIS at IDnow.
Government green light. Which countries have digital identities? 

There are currently 2.5 billion people around the world with digital identities, but that figure is expected to reach 4 billion by 2026. India lays claim to the world’s largest biometric ID system. Aadhaar is used by 99% of Indian adults (1.3 billion people) for tasks like opening bank accounts and obtaining SIM cards.  

In Europe, one of the first nations to fully embrace digital identity verification was Estonia. In fact, E-ID and its digital signature service were launched over 20 years ago, in 2002. Use cases include e-prescriptions and i-voting, which is used by approximately 33% of Estonians, wherever they are, to cast votes.

The success of a nation’s digital identity strategy comes largely down to how strongly the private and public sector pushes for its adoption. For example, in France, digital identity is driven by a combination of ambitious public initiatives and innovative private solutions like YRIS.

Identity crisis? The future of digital identity in the UK. Download to discover what the Digital Identity and Attributes Trust Framework means for the future of identity verification in the UK. Get your free copy

The UK’s digital identity approach has been more hesitant and nuanced. In 2022, it released its Digital Identity and Attributes Trust Framework – a set of rules and standards designed to establish trust in digital identity products. 

There are hundreds of use cases across the public and private sectors where digital identities could be used to optimize the user experience. For example, account opening in Banking, compliance checks in Crypto, age verification in Mobility, streamlined check-in processes in Travel, financial risk checks in Gambling, and contract signing in Telecommunication. The UK government has decided to start off with just a few use cases: Right to Work, Right to Rent, and DBS Checks. 

As these solutions improve and their adoption grows, it is highly likely that digital identity will soon become the standard for accessing a multitude of online services, both in the public and the private sector. 

The world of economic, societal and political possibilities opened by digital identities is vast and varied. According to research from the McKinsey Global Institute, countries that implement digital identities could unlock value equivalent to 3–13% of GDP by 2030. Broad adoption of an interoperable digital ID system will increase inclusion and provide greater access to finance, health and other essential services.

Check out below for the top five benefits of digital identity usage for businesses and users. 

1. Enhanced security and protection against identity theft. 

One of the main advantages of digital identity lies in the enhanced security it provides. With advanced methods like multi-factor authentication and the use of biometrics, the risks of fraud and identity theft are significantly reduced. These technologies ensure that the user is who they claim to be, thus limiting the chances of identity theft. 

In addition, digital identity helps to better protect users’ sensitive personal data. By centralizing the management of this data and relying on strict security standards, companies can ensure better protection of personal information, which is crucial in a world increasingly exposed to cyber threats. 

2. Improved operational efficiency.   

Along with strengthening security, digital identity also improves operational efficiency within organizations by automating processes and reducing processing times, such as during new client onboarding. 

By using centralized platforms for identity management, businesses can reduce costs associated with manual management. These gains in efficiency improve productivity and tracking and reduce human errors. 

3. Optimized user experience.   

One of the most appealing aspects of digital identity is the improved user experience. With solutions such as Single Sign-On, users can access various services without having to juggle multiple usernames and passwords. This simplifies their access to platforms while minimizing friction during online transactions. 

Digital identity also allows for greater personalization of services and interactivity. By better understanding the user, businesses can offer tailored services that meet their specific needs. This enhances customer loyalty while providing a seamless and intuitive experience, resulting in overall higher conversion rates. 

4. Adhere to regulatory compliance. 

Certifications such as the LoA guarantee an equivalent to face-to-face identity verification, while compliance with the eIDAS regulation (electronic IDentification, Authentication and trust Services) and certification by ANSSI (French Cybersecurity Agency) ensure digital identity solutions follow the highest European standards in terms of security and personal data protection. 

Businesses that comply with rules, regulations and laws reassure their clients that they’re taking the confidentiality of their personal information seriously. Compliance with European regulations is a strong testament to the rapid adoption of digital identity in sectors such as finance, healthcare and public administration. 

5. Undeniable competitive advantage.  

Digital identity offers a significant competitive advantage. A solution like YRIS provides access to new technologies and innovative capabilities, while offering the flexibility needed to quickly adapt to market changes.

By offering a smooth and secure user experience, businesses can more easily retain customers, who are more likely to return to services where they feel confident, where their identity is protected and where the experience is simple and intuitive. 

YRIS: Transforming the digital future of France.  

Digital identity, powered by solutions like YRIS, is profoundly transforming the management of online identity in France and beyond. With enhanced security, compliance with European regulations and advanced fraud protection, YRIS stands out as a simple and trusted solution. Available 24/7, it optimizes operational efficiency through automated processes, unlimited reuse of identity and a simplified user experience. 

 By easily integrating with existing systems and addressing an inclusive customer base, YRIS offers a comprehensive response to modern digital identity management challenges. 

Discover more about the benefits of digital identities in our blog, ‘Why the UK is banking on digital identity in 2023’. 

By

Mallaury Marie
Content Manager at IDnow
Connect with Mallaury on LinkedIn


Finema

This Month in Digital Identity — October Edition

Welcome to the October edition of our monthly digital identity series! This month, we’re exploring the critical developments and innovative strategies that are redefining the landscape of digital identity. We’ll delve into significant advancements in decentralized identity, the balance between regulation and privacy, the role of biometric technology in hiring compliance, and the establishment of security standards for digital ID wallets in the EU.

Here’s a closer look at the essential topics we’ll be covering:

Advancing Decentralized Identity with the SLAP Framework

Velocity Network has dedicated the past five years to developing the Internet of Careers, focusing on essential business needs through the SLAP framework. This innovative approach emphasizes four critical components:

- Survivable Credentials: These credentials are designed to remain valid and accessible over time, ensuring that users can reliably present their identities without facing barriers.
- Legal Risk Mitigation: By addressing potential legal challenges associated with identity verification, organizations can significantly reduce their exposure to regulatory pitfalls, fostering a more secure environment for both users and businesses.
- Accreditation for Issuers and Relying Parties: Establishing trusted standards for all participants in the identity ecosystem helps to enhance credibility and build trust among users.
- Practical Privacy: Prioritizing user privacy ensures that individuals maintain control over their personal information, which is essential in today’s digital landscape.

Velocity Network’s collaborative efforts invite stakeholders from various sectors to contribute to effective decentralized identity solutions. By working together, we can empower individuals with greater control over their identities and foster a more inclusive digital ecosystem.

Navigating the Tension Between Decentralized Identity and Regulation

In the ever-evolving digital landscape, the interplay between decentralized identity and regulatory frameworks has become increasingly critical. High-profile cases such as Silk Road and Tornado Cash highlight the challenges of balancing innovation with compliance.

To address these challenges, it is essential to adopt a balanced approach that fosters the development of decentralized reputation systems. Such systems can empower self-regulation while ensuring both privacy and accountability. By leveraging anonymous identities, we can create a framework where individuals have control over their digital presence while participating responsibly in digital platforms.

This approach not only enhances user empowerment but also helps build trust within communities. Learning from past experiences with regulatory challenges can inform the design of more resilient and adaptable decentralized identity systems. By understanding the nuances of this complex relationship, we can pave the way for innovative solutions that respect both freedom and the need for regulation.

Enhancing Hiring Compliance in the UK with Yoti Biometrics

Yoti is making significant strides in the UK hiring landscape by integrating biometric technology with Sterling’s background checks. This partnership aims to streamline compliance and enhance the security and accuracy of identity verification during the hiring process.

By utilizing Yoti’s biometric solutions, employers can simplify the compliance process, ensuring that they meet regulatory requirements efficiently. This integration not only reduces the risk of non-compliance but also enhances security, making it more difficult for fraudulent activities to occur.

Candidates benefit from this system as well, enjoying a smoother onboarding experience. The biometric verification process is designed to be quick and user-friendly, allowing job seekers to complete identity checks seamlessly. This innovative approach not only improves the overall efficiency of the hiring process but also instills greater confidence among employers and candidates alike.

As organizations increasingly recognize the value of biometric technology, Yoti’s integration with Sterling’s background checks stands as a promising development for the future of hiring compliance in the UK.

ENISA to Launch Cybersecurity Certification Scheme for EU Digital ID Wallets

In a significant move to bolster security in the digital identity landscape, the European Union Agency for Cybersecurity (ENISA) is set to establish a cybersecurity certification scheme for the EU’s digital ID wallets. This initiative aims to ensure that digital identity solutions meet high standards of security and trustworthiness, thereby promoting consumer confidence in these technologies.

The certification scheme will provide a robust framework for assessing and validating the security measures implemented in digital ID wallets. By aligning with EU regulations and standards, this initiative supports the broader strategy of creating a secure and interoperable digital identity ecosystem within the EU.

ENISA emphasizes the importance of collaboration with various stakeholders, including industry leaders and governmental bodies, to develop a comprehensive certification process. This collaborative approach is crucial for addressing the diverse needs and challenges in the digital identity landscape.

By fostering trust in digital identity solutions, this initiative paves the way for increased adoption and reliance on secure digital services across the EU.

We look forward to bringing you more insightful updates as we continue to explore the latest trends and innovations in the field of digital identity. Together, we can contribute to a more secure and inclusive digital future.

This Month in Digital Identity — October Edition was originally published in Finema on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Endpoint Protection Detection & Response (EPDR)

by John Tolbert

This report provides an overview of the Endpoint Protection Detection & Response (EPDR) market and a compass to help you find a solution that best meets your needs. It examines solutions that provide pre-execution malware identification and prevention, endpoint firewall, system file integrity monitoring, application controls, URL filtering, compromise detection, threat hunting, forensic analysis, reporting, alerting, and manual and automated response capabilities for endpoints of various types.

Analyst's View: Cloud Backup for AI Enabled Cyber Resilience Solutions

by Anne Bailey

To achieve cyber resilience, organizations need to take steps beyond preventing cyber-threats from impacting their digital infrastructure – they must also be able to respond to and recover when incidents occur. To achieve this, it is essential to backup, protect, and restore not only the business data, but also the data which defines today’s virtual and cloud IT infrastructure, and applications.

PingTalk

TX-RAMP and IAM secure Texas data

Explore how Ping Identity’s TX-RAMP Level II certification serves the IAM needs of Texas state agencies and institutions more securely and effectively.

Ping Identity’s Texas Risk and Authorization Management Program (TX-RAMP) Level II certification represents a significant milestone in our commitment to security, allowing us to serve Texas state agencies and institutions more effectively.

With a powerful, flexible IAM platform that meets the state’s stringent standards, Ping Identity is proud to play a pivotal role in securing Texas’s digital future. Whether you're modernizing your IAM strategy or embracing a zero-trust security framework, Ping Identity can provide the tools you need to protect identities and data while staying compliant with the highest security standards.


TBD

Known Customer Credential Hackathon

Participate in this hackathon to issue a Known Customer Credential and streamline KYC across payment apps.

tbDEX is an open messaging protocol that enables liquidity seekers to connect with liquidity providers. This means that as a liquidity provider, your business can be the backend supplier in several payment applications.

Performing KYC on repeat customers every time they attempt to transact with you from a different payment app would be a pain. To avoid this, you will use the Web5 SDK to issue a Known Customer Credential (KCC) to a customer, Alice, who you have already completed KYC on. You will store the JWT representing the KCC in Alice’s Decentralized Web Node so that she can present it to your business from any payment app.
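
For orientation, here is a compact sketch of the issuance step using the web5/dids and web5/credentials SDKs named in this challenge. Treat the exact option shapes, especially the evidence entry and its fields, as approximate; confirm them against the SDK references listed under Resources below.

```typescript
import { DidDht } from "@web5/dids";
import { VerifiableCredential } from "@web5/credentials";

// Create the issuer's DID; DidDht.create() returns a Bearer DID that can sign.
const issuerBearerDid = await DidDht.create();

// Alice's DID, taken from the challenge resources below.
const aliceDidUri =
  "did:dht:rr1w5z9hdjtt76e6zmqmyyxc5cfnwjype6prz45m6z1qsbm8yjao";

// Issue the KCC. The evidence entry is a placeholder, since the challenge
// does not require a real identity-verification flow.
const kcc = await VerifiableCredential.create({
  type: "KnownCustomerCredential",
  issuer: issuerBearerDid.uri,
  subject: aliceDidUri,
  expirationDate: "2026-10-01T00:00:00Z",
  data: { countryOfResidence: "US" },
  evidence: [{ kind: "document_verification", checks: ["passport"] }],
});

// Sign to produce the VC JWT; this is the string you store as a
// private record in Alice's DWN.
const signedKccJwt = await kcc.sign({ did: issuerBearerDid });
console.log(signedKccJwt);
```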

Challenge

- Create a Decentralized Identifier (DID) and DWN to use as the Issuer. Bonus: Use the DIF community DWN instance hosted by Google Cloud.
- Issue Alice a KCC that includes evidence. Note that for this challenge, you do not need to implement an actual identity verification flow.
- Install the VC Protocol onto your DWN so that you can communicate with Alice’s DWN.
- Obtain permission to write to Alice’s DWN by sending a GET request to: https://vc-to-dwn.tbddev.org/authorize?issuerDid=${issuerDidUri}
- Store the VC JWT of the KCC as a private record in Alice’s DWN.

Submit

To enter a submission for this hackathon, provide the DWN Record ID of the KCC.

Resources

- Alice’s DID: did:dht:rr1w5z9hdjtt76e6zmqmyyxc5cfnwjype6prz45m6z1qsbm8yjao
- web5/credentials SDK
- web5/api SDK
- How to create a DID and DWN with Web5.connect()
- Obtain Bearer DID - required to sign KCC
- Known Customer Credential Schema
- How to issue a VC with Web5
- Example of issuing a KCC with Web5
- Example of issued KCC
- How to install a DWN Protocol
- How to store a VC in a DWN

Contact Us

If you have any questions or need any help, please reach out to us in our #kcc-hackathon channel on Discord.

Tuesday, 01. October 2024

IdRamp

Account Takeover in Healthcare: How to Deliver Security and Trust

Recent warnings from the U.S. Department of Health and Human Services highlight the alarming surge in account takeover (ATO) incidents targeting healthcare and public health organizations.

The post Account Takeover in Healthcare: How to Deliver Security and Trust first appeared on Identity Verification Orchestration.

KuppingerCole

Transforming Access Management: Strategies for the New Digital Landscape

In today's rapidly evolving digital landscape, organizations face increasing complexity in managing application access. The proliferation of diverse applications, coupled with the end-of-life (EOL) for traditional solutions like Oracle and SAP GRC, necessitates a reevaluation of access governance strategies. Traditional methods often fall short in addressing these challenges, requiring a shift towards more comprehensive and integrated approaches.

Modern technology offers innovative solutions to these issues. Organizations must adopt tools that support a wide range of applications, ensuring seamless integration and consistent delivery of access governance. By embracing these advanced solutions, businesses can achieve fine-grained entitlement management and enhance overall security posture.

Martin Kuppinger, Principal Analyst at KuppingerCole, will discuss the changing landscape of Application Access Governance and Application Risk Management. He will explore the convergence with Identity Governance and Administration (IGA), examine various scenarios, and evaluate the applicability of different types of solutions.

Vinit Shah, VP Product Management at Saviynt, will address the specific challenges organizations face with their governance programs. He will highlight the need for fine-grained entitlement management and discuss the unique strengths of Saviynt's solution in delivering consistent governance across diverse applications.




Indicio

New industry report highlights Indicio’s masterful innovation in biometric digital identity for travel and hospitality sectors

Analyst firm Acuity Market Intelligence’s The Prism Project reports that the market for biometric digital identity in travel is expected to grow at a compound annual growth rate of 92% and generate over $72 billion globally by 2028. We take a look at key points from the report, how the industry is growing, and the next steps with decentralization.

By Tim Spring

Biometrics and digital identity 

As travelers increasingly expect to be able to do almost anything from the comfort of their home and the convenience of their smartphones, biometrics and digital identity are central to meeting these expectations of seamless digital travel.

Making this seamless world a reality requires a single digital identity that works across platforms, unifies the traveler’s journey from airport to destination and back again, and can integrate ancillary travel and tourist services.

In a way, the technology goal is similar to how you can log in to different websites using a federated identity, such as a Google account. But it differs in two important respects: one, this digital identity is derived from government systems of record, such as a passport, rather than from a third-party identity provider; and two, you, not a third-party identity provider, control and store this identity and the personal data associated with it.

These features are critical for privacy and privacy compliance (the traveler always has the power of consent to sharing data) and security (removing the centralized storage of personal data, especially biometric data, removes the risk of mass data breaches, identity fraud, and catastrophic loss of trust).

The emergence of technology solutions that meet these requirements is explored in the new 2024 Biometric Digital Identity Travel and Hospitality Prism Report from Acuity Market Intelligence. The report, which first launched in 2023, analyzes the state of the solution market and sets out an evaluative framework for what is working best to deliver seamless travel. In sum, it is technology that “puts human beings first,” namely:

- Digital identity belongs to the user it describes.
- True ID empowerment relies on government systems of record.
- Identity must be consistently and continuously orchestrated to remain secure.
- Biometrics must be at the core of any sustainable digital identity ecosystem.

“By investing in biometric digital identity solutions like those identified in [the report], travel and hospitality stakeholders will find measurable benefits—from improved guest flow, to bulletproof compliance, to secure loyalty programs. But beyond the immediately tangible results, participating in the biometric digital identity ecosystem has a wider, global effect.”

The report highlights the work of Indicio and its partner SITA for developing solutions that “masterfully deploy biometric guest experiences around the globe.”

Indicio and SITA created the first successfully deployed Digital Travel Credential for seamless border crossing. By using Verifiable Credential technology, travelers were able to turn their passports into “government-grade” digital identities for instant, frictionless authentication. A key feature of the Indicio-SITA credential solution is the ability to bind the biometrics in a passport to the rightful owner of that passport. This, in effect, created two-factor biometric authentication without the need for airports or airlines to store biometric data.

To learn more about “bring your own biometrics,” and how Verifiable Credentials enable seamless data sharing, contact us for a demo — or book a free workshop where we’ll analyze your use case.

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post New industry report highlights Indicio’s masterful innovation in biometric digital identity for travel and hospitality sectors appeared first on Indicio.


Datarella

Logistics Tracking and Communication for the Real World!

We’re excited today to reveal the hardware and a bit of software magic behind Track & Trust and announce a milestone achievement on our way to commercializing the product! Track & Trust is a logistics tracking and communication toolset. This post dives into the basics of the hardware and the software that drives it, as well as some of the unique capabilities of the system. We’ll be following up with a series of posts on the tech and on the pilot in the field!

We’ve just passed our “Site Acceptance Test” with the European Space Agency. This test evaluates the entire technical range of functionalities expected for the Track & Trust system. Consequently, we’re proud to announce that we passed all the tests in the three-day site acceptance test inspection. We tested all thirty-four formal technical requirements of the system. Although the testing took place in June, we had to keep quiet about it for operational reasons. Meanwhile, our team has been working hard to prepare for the next phase.

We’re building the ultimate communications system with our partners at Weaver Labs and Ororatech. Specifically, Track & Trust relies on “magical” black boxes that make logistics tracking and communication possible in the worst conditions. These boxes can withstand internet outages, power outages, and wet conditions. You can plug them into a truck and they’ll keep working. In addition, they’re designed to be user-friendly and easy to integrate with existing systems.

What’s in the black box?

Our partners at Weaver Labs built hardware with a Swiss Army knife of multi-bearer communications capabilities. It doesn’t need the internet to load the software. The boxes pack 4G radios, two WiFi radios, and Weaver Labs’ clever cellmesh software. These capabilities enable the boxes to talk to each other offline, exchange information, and post when possible. They can post using traditional 4G or the included satcom uplink from Ororatech. Furthermore, this technology allows for seamless communication between devices, even in areas with limited connectivity.

We at Datarella have built a suite of interfaces into Track & Trust that any user can leverage to contribute cryptographically signed inputs about logistics events. Our system has real-time monitoring and CI/CD capabilities that enable us to fix issues on the fly. This is a plug-and-play solution for gathering more information about shipments. We can funnel this information directly into logistics service providers’ databases using our APIs. As a result, our customers can enjoy greater visibility and control over their supply chains.

Value for Logistics Tracking Organizations

Track & Trust generates secure, validated events with every user interaction. These events are geolocated with GNSS and timestamped. We anchor them to the fetch.ai blockchain via a hashing mechanism, resulting in a powerful combination of privacy and immutability. Moreover, this ensures that all data is tamper-proof and transparent.
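
We’ll detail the exact pipeline in upcoming posts. In the meantime, here is a simplified sketch of the general pattern: a signed, geolocated, timestamped event whose hash, rather than its contents, is what gets anchored on-chain. The event shape and keys below are illustrative, not our production schema or anchoring contract.

```typescript
import { createHash, generateKeyPairSync, sign } from "crypto";

// Illustrative shape of a logistics event; not the production schema.
interface LogisticsEvent {
  shipmentId: string;
  kind: string;      // e.g. "handover", "customs-cleared"
  lat: number;       // GNSS fix
  lon: number;
  timestamp: string; // ISO 8601
}

// Sign the event so the contributor is accountable for it.
const { privateKey } = generateKeyPairSync("ed25519");
const event: LogisticsEvent = {
  shipmentId: "SHIP-042",
  kind: "handover",
  lat: 33.8938,
  lon: 35.5018,
  timestamp: new Date().toISOString(),
};
const payload = Buffer.from(JSON.stringify(event));
const signature = sign(null, payload, privateKey); // Ed25519 needs no digest

// Only the hash goes on-chain: anyone holding the event can later prove it
// matches the anchor, but the chain never sees the private payload.
const anchor = createHash("sha256")
  .update(payload)
  .update(signature)
  .digest("hex");
console.log(`anchor ${anchor} would be posted to the fetch.ai chain`);
```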

We delayed posting about it publicly because we needed to move quickly and get the devices into our planned piloting environment in Lebanon. This required sensitive handling. Now we’re making progress on the pilot operations in the field, despite difficult conditions. Finally, we’re excited to share more details from the summer and look forward to continuing our work on this groundbreaking project.

The post Logistics Tracking and Communication for the Real World! appeared first on DATARELLA.


PingTalk

How to Orchestrate Risk and Fraud Services Into User Journeys

Ping’s orchestration capabilities and PingOne Protect are the tools you need to orchestrate user journeys with context and risk.

Imagine a digital world where every user experience is smooth, secure, and seamless. No more clunky logins or frustrating security hoops—just pure, uninterrupted interaction. At Ping, we’re on a mission to make this vision a reality. We know that as our valued customers, you understand the challenges of managing diverse digital journeys and integrating risk services. That's why our cutting-edge orchestration solutions transform these challenges into your organization’s greatest strengths. Here’s how Ping can help you orchestrate user journeys that leverage context, risk signals, and all your many risk and fraud investments. Hopefully, you already have either PingOne DaVinci or Intelligent Access via PingOne Advanced Identity Cloud or PingAM. If you aren’t utilizing orchestration yet, you can learn more here.

Monday, 30. September 2024

auth0

The Curious “Case” of the Bearer Scheme

A wrong interpretation of the OAuth specifications can lead to hours of debugging and headaches. Learn the details to avoid them.
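
The “case” in the title points at a classic pitfall: RFC 9110 defines HTTP authentication scheme names as case-insensitive, so “Bearer”, “bearer”, and “BEARER” all name the same scheme, while the token itself must be passed through unchanged. A server that string-compares the scheme exactly will reject legal requests. Here is a minimal illustration of a tolerant parser (our own sketch, not code from the post):

```typescript
// Extract a bearer token, matching the scheme case-insensitively per
// RFC 9110, while leaving the token value untouched.
function extractBearerToken(authorization: string | undefined): string | null {
  if (!authorization) return null;
  const spaceIdx = authorization.indexOf(" ");
  if (spaceIdx < 0) return null;
  const scheme = authorization.slice(0, spaceIdx);
  if (scheme.toLowerCase() !== "bearer") return null; // case-insensitive match
  const token = authorization.slice(spaceIdx + 1).trim();
  return token.length > 0 ? token : null;
}

// extractBearerToken("bearer eyJhbGciOi...") and
// extractBearerToken("Bearer eyJhbGciOi...") both return the token.
```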

Caribou Digital

Traditional evaluation looks backward; innovation looks forward.

Traditional evaluation looks backward; innovation looks forward. How do we evaluate innovation in real time?

Written by Elise Montano and Niamh Barry on the Measurement & Impact team at Caribou Digital.

Innovation programs cultivate an environment of experimentation and continuous improvement in developing, implementing, and scaling new ideas, products, or processes to drive growth. These programs depend on rapid, actionable insights to stay ahead and be ready to pivot strategies and optimize outcomes in real time. However, many traditional evaluation approaches are neither responsive nor adaptive to the speed and focus of insights needed in innovation programs.

In Caribou Digital’s work with Mastercard Strive, we sought opportunities to break away from traditional evaluation models to try a new approach that retained values of timeliness, flexibility, agility, and rigor, with a clear understanding of real-world constraints. From this experience, we devised an evaluative approach to support programs working in dynamic systems to generate impactful and incisive insights that enhance performance and impact.

Traditional evaluation is failing innovation programs.

Evaluations are usually conducted at predetermined moments in a program — for example, mid- or endpoint — rather than when stakeholders need information. Such evaluations are focused on pre-established questions and do not respond to program stakeholders’ dynamic and complex insights needs. They typically focus on accountability and documenting processes, not learning and improving program performance.

Innovative programs need real-time information that supports dynamic learning, rapid response, and experimentation for continuous improvement. Traditional evaluation approaches are simply too rigid and fail to address these core needs.

Organizations that deliver complex programs need a better way of getting incisive insights at critical moments while maintaining evaluative rigor.

Our modular evaluation approach works with innovation programs.

Building on formative and developmental evaluation principles, we developed a flexible and agile approach to generating evaluative insights within the Mastercard Strive program. We call this “modular evaluation.” The characteristics of this approach include:

- Embedded: Work is led and conducted by evaluation specialists immersed in the program delivery.
  >> The Caribou Measurement and Impact team is part of Mastercard Strive program delivery, working daily with program directors, grantees, and partners. We used our detailed knowledge of the program and its complexities, constraints, and learning objectives in evaluations.
- Modular: Evaluations are conducted in thematic modules that enable faster, more focused, and concise work.
  >> We deployed three thematic modules — 1) small business outcomes, 2) program strategy and governance, and 3) partner management — allowing us to focus entirely on each module in turn.
- Flexible deployment: Evaluations are delivered as and when insights are needed to support strategic decision-making, not according to a prescribed timeline.
  >> We delivered the partner management module with our first phase of programs before developing a second phase so the insights from one could be rolled into the next. We also conducted our small business outcomes module twice, nine months apart, to generate insights when grantees had the most data available.
- Lean: Evaluations focus only on pertinent questions and data collection methods. They enhance existing data collected through regular reporting with lean data collection where it counts.
  >> For each module, we used grantee data from existing reports and filled the information gaps through focused interviews.

The benefits of this approach were immediately evident to our team and clients. We lined up evaluation modules to deploy throughout the project to provide insights at the moment they had the most strategic value.

We identified five key outcomes of this approach based on our experience.

1. Modular evaluations enable precision and flexibility, supporting insights at decisive times.

Our approach acknowledges that some modules or topics may require faster, more focused, and more concise work, or have different internal and external stakeholders reliant on insights. Each module can be managed independently, with its own evaluation questions and analytical frameworks, according to a timeline that best supports decision-making.

In Mastercard Strive, our small business outcomes module was adapted based on the outcomes expected at specific points. For example, the first iteration delivered insights on the impact of strategies for engaging small businesses with various solutions. It suggested where pivots could support deeper engagement and what other types of programs would address gaps in our portfolio. The second iteration — conducted nine months later — assessed early outcomes from our first phase of grantees (e.g., on small business capabilities and uptake of new business practices, products, and services) and revisited solution engagement data to incorporate new results and grantees. Future outcomes modules toward the end of the program will look at long-term outcomes and the sustainability of impacts for small businesses.

Mastercard Strive small business outcomes evaluation module

2. Focused modules support rapid delivery of insights.

Each evaluation module took at most three months to complete, and interim insights were often available within a month of launching data collection. In contrast, traditional evaluations can often take over six months to deliver final insights. Collecting and combining data across multiple themes from a wide range of sources adds complexity to the process of analyzing and presenting that data. In addition to lean data collection, agile approaches allow evaluators to focus on specific topics, dig into the details, and identify more nuanced and detailed insights.

3. Rapid insights support adaptive strategies.

Access to real-time learning enables grant and fund managers to be dynamic and responsive, and to make evidence-based decisions by working with our measurement and impact team. Our granting strategy evaluation module built on insights gleaned through ad hoc meetings and reporting, leading to a quick — but structured — approach to collecting and analyzing primary data. Within four weeks, our team had mapped the strengths and weaknesses of the granting and grantee management processes. We delivered concise recommendations that immediately fed into our second granting phase, including how we selected, developed, and managed programs.

4. Flexible timing and focused modules support stakeholder recall.

Traditional evaluations often interview stakeholders once on a wide range of topics, making for unwieldy interviews that ask questions about decisions made over a year before. A more flexible approach allowed our teams to conduct shorter, more focused interviews with stakeholders. The interviews were concise, asked questions about recent decisions, and allowed participants to prepare more effectively.

5. Modular evaluations are more cost-efficient.

We found this evaluative approach more cost-efficient than traditional evaluations for three reasons. First, the rapid, iterative nature of modular evaluations supports learning and continuous improvement that reveals opportunities for experimentation and adaptation earlier on, avoiding costly mistakes. Second, modular evaluations are inherently lean. Data collection builds on existing knowledge and is respectful of participants’ time, giving them clear boundaries of the scope of each module. Evaluation teams are embedded within the programs and don’t need to spend time learning about programs’ context. Finally, the modular nature of this evaluation supports scalability. Program managers have flexibility on what is included and how much budget they are willing to dedicate to evaluations, ensuring that each module delivers adequate value for money.

Deploying modular evaluations in innovation programs

Modular evaluations are distinct from ongoing monitoring or measurement: they optimize the insights from monitoring systems while coupling them with rigorous evaluative approaches that ask how and why a particular outcome has been observed. To deploy modular evaluations, organizations require budget flexibility, an embrace of uncertainty about evaluation timing and focus, and a team that is open and supportive of real-time learning.

At Caribou Digital, we’ve seen the value obtained from flexible innovation-supportive approaches and are excited to promote a method that works with and for technology-focused innovation programs. We continue to deploy modular evaluations in our work and collaborate with others who are similarly interested in ensuring that evaluations are candid, purposeful, and timely. If you are interested in this approach, please contact us at Elise Montano or Niamh Barry.

Traditional evaluation looks backward; innovation looks forward. was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Zero Trust Beyond Identity: A Holistic Approach to Cybersecurity

by Alejandro Leal

The Zero Trust security model is designed to enhance cybersecurity by eliminating inherent trust within networks and requiring strict verification for every access request. However, Zero Trust is a multifaceted cybersecurity strategy that extends far beyond simple identity verification. While confirming user identity is essential, focusing solely on this aspect overlooks the depth and breadth of what Zero Trust truly encompasses.

To strengthen their defenses against increasingly sophisticated cyber threats, organizations must adopt a comprehensive Zero Trust strategy that secures all facets of their digital environment. By integrating robust measures across identity, devices, networks, applications, and data, coupled with analytics and automation, organizations can achieve a more resilient and proactive cybersecurity posture.

Here’s how these fundamental aspects—users, devices, applications, networks, data, visibility, and automation—play a critical role in a Zero Trust strategy:

1. Users: Zero Trust security starts with stringent identity verification. Techniques like multi-factor and continuous authentication ensure that only authorized users access resources, minimizing insider threats and unauthorized access.

2. Devices: All devices must be secured and continuously monitored. This includes compliance checks, real-time device inspection, assessment, and patching to ensure that devices accessing the network are not compromising security.

3. Systems and Applications: Protecting systems and applications involves implementing advanced measures such as software risk management, application inventory, and continuous monitoring for vulnerabilities and anomalies.

4. Networks: Focusing on granular policy, real-time access decisions, and segmentation strategies such as micro-segmentation helps control access and prevent lateral movement within networks, a critical strategy to isolate and contain threats.

5. Data: Zero Trust necessitates rigorous data protection measures such as encryption and access controls to ensure data integrity and confidentiality, crucial for compliance and security. Organizations can also adopt more advanced techniques, such as data loss prevention and data monitoring and sensing.

6. Visibility and Analytics: Comprehensive monitoring across networks and systems helps detect anomalies and potential threats, providing the necessary insights to preemptively address security issues.

7. Automation and Orchestration: Streamlining responses to security events through automation and orchestration reduces response times and enhances security operations, making threat detection and mitigation more efficient.
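
To make the interplay of these pillars concrete, here is a deliberately simplified, vendor-neutral sketch of an access decision that evaluates user, device, network, and data-sensitivity signals on every request instead of trusting anything by default. Real policy engines are far richer; the signal names and thresholds here are illustrative only.

```typescript
// Illustrative Zero Trust access decision combining several pillars.
interface AccessRequest {
  userAuthenticated: boolean; // e.g. MFA completed
  authStrength: "password" | "mfa" | "phishing-resistant";
  deviceCompliant: boolean;   // patch level, posture checks
  networkSegment: string;     // source micro-segment
  resourceSensitivity: "low" | "high";
}

function decide(req: AccessRequest): "allow" | "step-up" | "deny" {
  // No inherent trust: an unverified user or non-compliant device fails fast.
  if (!req.userAuthenticated || !req.deviceCompliant) return "deny";
  // High-sensitivity data demands stronger auth and a trusted segment.
  if (req.resourceSensitivity === "high") {
    if (req.authStrength === "password") return "step-up";
    if (req.networkSegment !== "restricted") return "deny";
  }
  return "allow";
}
```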

For organizations implementing Zero Trust, it's essential to integrate these elements into a cohesive strategy that aligns with Zero Trust principles, adapting over time to meet the dynamic nature of cyber threats. This holistic approach not only enhances security but also supports operational efficiency and compliance across all organizational levels.

Join us in December in Frankfurt at our cyberevolution conference, where we will be discussing zero trust in more detail.

Take a look at some of the sessions on Zero Trust:

- CISA Zero Trust Maturity Model
- PANEL: Zero Trust in Practice: Challenges and Success Stories
- Beyond the Now: Examining Emerging Trends in the Cybersecurity Landscape

Cloud Backup for AI Enabled Cyber Resilience

by Mike Small

This Leadership Compass provides a roadmap for organizations navigating the evolving landscape of cloud backup and cyber resilience, highlighting how AI and machine learning are transforming data protection strategies. As society becomes more digitally dependent and cyber threats become increasingly sophisticated, the need for backup and resilience solutions has never been greater. This report not only identifies the top vendors in the market but also delves into the innovative technologies driving the next generation of cyber resilience. It provides evaluations of leading solutions with an emphasis on regulatory compliance. This report is an essential guide for organizations looking to increase their cyber resilience.

PingTalk

Challenges in Preparing Ecommerce Channels for the Peak Season Rush

The ecommerce peak season rush is around the corner. Here's how to prepare to improve conversions, wow your customers, and keep fraudsters at bay.

Sunday, 29. September 2024

KuppingerCole

Leading the Cyber Charge: Insights from the CEO and CISO Office

Matthias invited KuppingerCole CEO Berthold Kerl and CISO Christopher Schütze to discuss the relationship between the CEO and the CISO in integrating cybersecurity into the company's business strategy. They highlight the key challenges faced by CEOs in integrating cybersecurity, the importance of communication between the CISO and the board, and the role of regulatory compliance. They also discuss the need to balance cutting-edge cybersecurity solutions with cost considerations and the trends to look out for in the coming years, such as AI-driven security and supply chain security.




Spherical Cow Consulting

Operationalizing Trust Frameworks: Who’s Going to Keep the Lights On?


Given my recent posts on digital wallets and the future of academic identity federation, you might be able to tell I’m on a bit of a rant. These topics share a common thread: we have a lot of experience building trust frameworks but significantly less experience in operationalizing those trust frameworks and making them sustainable.

What’s a Trust Framework?

Backing up a bit, let’s discuss a trust framework in this post’s context. According to NISTIR 8149, a trust framework is “the ‘rules’ underpinning federated identity management, typically consisting of system, legal, conformance, and recognition.” It’s about applying a whole set of technical and governance rules to the protocols, contracts, and regulations that let you use a digital identity from one organization to sign in to another organization’s services. A trust framework is the backbone of how federated identity functions effectively—at least in theory.

For those who care about ensuring safe and effective interoperability, though, the rules defined in a trust framework are critical and everyone should apply them in their federations. Operationalizing trust frameworks means doing more than ‘just’ defining the policies and rules (as if that’s not hard enough). It also means creating the practical mechanisms and governance structures to make those rules measurable, enforceable, and part of daily operations. It means moving from theoretical planning to real-world execution, with a way to know when the frameworks are being correctly applied and when an entity is out of conformance.
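To make “measurable and enforceable” concrete, here is a minimal TypeScript sketch of what a machine-checkable conformance rule could look like. The https://refeds.org/sirtfi value is the real REFEDS Sirtfi assurance identifier; the metadata shape and function names are illustrative, not any federation’s actual tooling.

// Illustrative shape for an entity's published metadata attributes.
interface EntityMetadata {
  entityId: string;
  assuranceCertifications: string[]; // assurance values from entity attributes
}

// Real REFEDS Sirtfi assurance identifier; everything else here is hypothetical.
const SIRTFI = "https://refeds.org/sirtfi";

// A conformance rule is just a named, testable predicate over metadata.
function conformsToSirtfi(entity: EntityMetadata): boolean {
  return entity.assuranceCertifications.includes(SIRTFI);
}

// Flag out-of-conformance entities so federation operators can follow up.
function auditFederation(entities: EntityMetadata[]): string[] {
  return entities.filter((e) => !conformsToSirtfi(e)).map((e) => e.entityId);
}

Even a check this small is the difference between a policy document and daily operations: someone has to run it, act on the output, and fund both.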

The Funding Crisis Nobody Wants to Talk About

This is where the real problem lies. Who funds this infrastructure? A trust framework involves a ridiculous number of different organizations. They are all supported in various ways, and the identity federation part of their services is rarely the primary reason they exist. (University IdPs are not the reason that universities exist. Identity and access management services are not why publishers sell journal subscriptions.)

The federation operators themselves are often underfunded and overstretched. Out of all the global federations, only a handful have the resources to innovate. The rest? They’re in survival mode—keeping old systems running and sticking with SAML because it works well enough. They can’t afford to migrate. How can they require their federation members to pay to comply with a trust framework when the benefits are intangible to their core missions?

Lessons from the R&E Space: We Can’t Just Ignore the Underfunded Parts

The worlds of commerce and government, while buzzing about digital wallets and verifiable credentials, need to wake up to the realities that the R&E federations have lived with for decades. Trust isn’t just a tech problem; it’s a governance problem, a funding problem, a sustainability problem. Right now, too many organizations are excited about issuing credentials without thinking about how to manage them when things go wrong.

The Research and Education (R&E) federations have been there, done that, and frankly, are still wondering if it’s worth doing again. They’ve experienced the growing pains that come with scaling trust across borders and organizations, but they’re also exhausted—financially and operationally. It’s not that they don’t want to help; they can’t afford to.

R&E federations have the trust frameworks. They don’t have the resources necessary to operationalize those frameworks in a way that reaches all parties involved.

What’s the Future for Trust Frameworks?

So, where does that leave us? If we want federated identity to work sustainably across sectors and borders, we need to figure out the support model(s). We need a governance structure that doesn’t just sound good in theory but works in practice without requiring federations to burn themselves out.

And yes, some of this may come from government backing. However, we also need to think about models that work where government involvement isn’t the answer—where decentralized, community-driven approaches, like REFEDS SIRTFI for incident response, are more appropriate. We need to build bridges between these different types of frameworks and find a way for them to coexist, or else we’re just going to keep reinventing the wheel.

Ultimately, operationalizing trust frameworks is about more than technology or policies. It’s about ensuring that the people running the systems have the support they need, that the lights stay on, and that we don’t lose trust simply because we can’t afford to maintain it. The R&E sector has valuable lessons to offer, but without a more collaborative and well-funded approach, the rest of the identity world might find itself learning those same lessons the hard way.

A Call to Action Without All the Answers

I recognize that I’m shouting about a problem for which I don’t have an answer. But that’s exactly why I’m getting all rant-y about this. I hope we can collectively develop more effective ideas—better than the grassroots community efforts of the past—so that every organization involved finally recognizes the infrastructure underpinning our trust frameworks as critical. This effort isn’t just about keeping federated identity afloat; we must support, value, and encourage it to evolve, so it can deliver on its promise for the long haul.

The post Operationalizing Trust Frameworks: Who’s Going to Keep the Lights On? appeared first on Spherical Cow Consulting.

Thursday, 26. September 2024

KuppingerCole

How to Build a Modern Approach to Identity Governance in a SaaS-First World


In today's tech landscape, the shift towards distributed software environments and diverse access standards has transformed identity governance into a complex maze. Our upcoming webinar, "How to Build a Modern Approach to Identity Governance in a SaaS-First World", addresses the challenges and solutions for managing identities and access in cloud-based SaaS environments.

Modern technology offers innovative approaches to tackle these challenges. By leveraging advanced tools and methodologies, organizations can achieve complete visibility and control over SaaS applications. This ensures robust security, privacy, and compliance, while simplifying the management of user entitlements and access roles.

Warwick Ashford, Senior Analyst at KuppingerCole, will discuss the security, privacy, and compliance challenges associated with the growing use of cloud-based SaaS applications. He will explain why complete visibility and control of SaaS applications is essential and outline the key elements to achieving that goal.

Chaithanya Yambari, Co-Founder and CTO at Zluri, will share insights into modern identity governance strategies that provide real-time visibility and automated lifecycle management. He will cover how these strategies simplify audits, ensure compliance, and offer fine-grained access control within each application.




Elliptic

OFAC and FinCEN target major Russian money laundering services including Cryptex and PM2BTC


The US Treasury’s Office of Foreign Assets Control (OFAC) has today issued sanctions against Cryptex–a crypto exchange registered in Saint Vincent and the Grenadines–due to its role in providing financial services to Russian cybercriminals, including receiving over $51.2 million in funds derived from ransomware attacks. OFAC has identified four cryptoasset addresses connected to this exchange. Alongside OFAC’s action, FinCEN has issued an order designating PM2BTC–another crypto exchange associated with Russian illicit finance–as a “primary money laundering concern”. Sergey Sergeevich Ivanov, also sanctioned today, is associated with both entities.


liminal (was OWI)

How Market Monitor Helps Industry Leaders Stay Ahead with Timely, Actionable Insights

Staying ahead of the curve is no easy feat in today’s fast-paced digital landscape. Whether you’re in marketing, compliance, or product management, navigating the flood of information and identifying what truly matters is challenging. We’ve developed the Market Monitor, the latest feature of our Link platform, to help with just that challenge. By providing actionable insights and real-time competitive intelligence, Market Monitor helps professionals from diverse industries take proactive steps to outsmart competition, optimize performance, and make data-driven decisions. The Market Monitor filters the noise by leveraging our years of market research expertise to connect in-market events to themes and insights and then allowing personalized subscriptions to alerts.

To give you a glimpse into how Market Monitor can transform your day-to-day, let’s explore how different professionals leverage this powerful tool.

1. Alex – Marketing VP, Mid-Sized Vendor

“Market Monitor ensures that our campaigns target emerging fraud trends before the competition does.”

As a Marketing VP at a mid-sized cybersecurity vendor, Alex faces the challenge of keeping up with a constantly changing landscape of fraud prevention. With Market Monitor, Alex can quickly identify emerging trends that matter, from new threat vectors to competitor marketing shifts. Instead of sifting through irrelevant news, Market Monitor curates real-time insights that help Alex fine-tune campaigns, stay a step ahead of competitors, and measure the impact of her marketing efforts more effectively.

The Result: Alex’s team can quickly adapt strategies, driving higher engagement and revenue growth by focusing on what resonates most with customers.

2. David – CEO, Early-Stage Startup

“Market Monitor gives me the competitive edge to attract investors and launch our product with confidence.”

David, the CEO of an early-stage startup focused on AI-driven fraud detection, needs a clear understanding of the competitive landscape to secure funding. Market Monitor not only delivers insights into competitor strategies but also helps David identify potential investors who are showing interest in his sector. By using these insights to craft a compelling narrative, David can confidently approach investors and make data-backed decisions for his product’s go-to-market strategy.

The Result: David secures the resources and visibility needed to propel his startup toward success.

3. Maria – Product Manager, Cybersecurity Vendor

“With Market Monitor, I can build a product roadmap that’s aligned with real customer needs and market trends.”

As a Product Manager, Maria’s role demands a deep understanding of both customer pain points and competitor offerings. Market Monitor simplifies the process by delivering insights on customer sentiment, competitor product updates, and emerging technologies within the document verification space. Maria can now make more informed decisions about feature development and prioritize what resonates most with users.

The Result: Maria creates a product roadmap that drives adoption and innovation, ensuring her company stays ahead of the competition.

4. Ben – Biometrics Enthusiast & Career Seeker

“Market Monitor helps me stay updated on biometrics trends and explore new career opportunities.”

Ben is a biometrics professional who is eager to grow in the rapidly evolving field of cybersecurity. Market Monitor enables Ben to filter through the noise and access the most relevant news, industry insights, and even job opportunities specific to biometrics. With tailored updates, Ben can sharpen his expertise and confidently make his next career move.

The Result: Ben not only stays informed but also discovers new pathways for career growth in a field he’s passionate about.

5. Chris – Enterprise Risk Analyst

“I can evaluate fraud prevention solutions quickly and make data-driven recommendations with confidence.”

Chris, an Enterprise Risk Analyst at a large financial institution, is responsible for researching and recommending the best fraud prevention solutions. With Market Monitor’s real-time updates on new technologies, vendor comparisons, and industry best practices, Chris can streamline his research process. The tool’s ability to filter information by fraud type, deployment model, or budget allows Chris to make data-backed recommendations efficiently.

The Result: Chris minimizes fraud risk for his organization by staying ahead of emerging fraud technologies and selecting the right solutions.

Why Market Monitor Matters

In a world of overwhelming, disparate data, Link’s new Market Monitor ensures that you’re always one step ahead. Whether you’re adapting your marketing strategy, ensuring compliance, or navigating the competitive landscape, Market Monitor gives you the insights you need to act decisively. With tailored intelligence and real-time updates, professionals across industries use Market Monitor to make better, faster decisions and drive meaningful results for their organizations.

Are you ready to experience the power of Market Monitor for yourself? Try it now in Link and stay ahead of the market.

The post How Market Monitor Helps Industry Leaders Stay Ahead with Timely, Actionable Insights appeared first on Liminal.co.


IDnow

Time’s up: Urgent warning issued to all unlicensed gambling operators in Brazil.

The Brazilian Finance Ministry has set a new deadline for operators to apply for a license. Are you ready for January 1?

Discussions regarding Brazil’s online gambling market have been ongoing since 2018, when the National Congress tasked the federal government with regulating the industry.

Back then, it was hoped that new regulations would create a safer and more transparent gambling environment, which would increase government revenue through taxation, protect players from fraud and promote responsible gambling practices. Many believed that such changes would also attract both local and international operators and transform Brazil into a major hub for the gambling industry in Latin America.

There have been numerous twists and turns since then, but in July 2024, Brazil’s regulatory process for online gambling, which included 10 detailed ordinances, was finally completed.

The new rules impose strict requirements on both local and international operators, including financial thresholds and rigorous background checks to ensure transparency and prevent money laundering. A mandatory fee structure for license applications was also established, ensuring that only financially stable and well-regulated companies could enter the market.

Read more about the other regulatory requirements, along with the multitude of challenges and opportunities facing gambling operators in Brazil in an interview with Ronaldo Kos, LATAM Gaming here.

Initially, companies had until August 2024 to apply for a license; those that were successful would be able to operate under the new bet.br domain from January 1, 2025; other gambling domains, including .com, would be blocked. It was announced that other licensing application windows would be announced in due time.

However, in mid-September, Brazil’s Finance Ministry announced that operators that had not applied for a license by midnight on September 30 would have to cease operations from October 1 until they had applied and received their permits. Any that continued to operate would be subject to having their websites blocked and fines of up to R$2 billion (US$354 million).

The two-week notice period was certainly a surprise for the industry. As gambling operators rush to get their applications submitted in time, it’s important they do things correctly and ensure they have the right technological stack in place, especially to comply with identity verification requirements.

Ronaldo Kos, LATAM Gaming at IDnow
All bets are off. Why the rush?

Ensuring all operators meet strict legal and ethical standards will strengthen the integrity of the sector, so this is an admirable, yet sudden, step for the Ministry to take. Many, however, are left wondering the same thing: Why the change of dates? The cynical may think it is just to gather additional revenue as soon as possible, but according to Finance Minister Fernando Haddad, this has nothing to do with it.

Many, including Fernando, believe Brazil is suffering from a ‘gambling epidemic,’ citing the nearly 25 million people who placed sports bets in the first seven months of 2024, averaging around 3.5 million new users per month.

Read more about the dangers of using grey market platforms in our blog, ‘Before Brazilian regulation: The dangers of gambling without KYC.’

Gambling has become a serious social problem and one the Finance Minister has vowed to crack down on.

It is this reason, among many others, that underscores the importance of, and appetite for, the Brazilian sports betting market becoming regulated as soon as possible. Further checks, linked to spending limits, are likely to be mandated in the coming months to protect the most vulnerable groups, including the elderly and those receiving state benefits.

How many Brazilian operators have applied for a license?

So far, 132 companies have submitted their applications to continue operations, and that number is constantly increasing as the deadline approaches.

All active operators who submit an application before the October 1 deadline will be allowed to continue operating, provided they are granted a license. Companies that applied before August 20 have been assured that they will receive a decision from the government regarding the status of their license application in time to be able to start operating from January 1. Those that applied after this date can continue to operate as usual until the end of the year but may not be able to continue ‘business as usual’ once regulated betting begins on January 1. Those that operate without a license run the risk of a fine and being banned for up to 10 years.

After the October 1 deadline, the government will process the pool of applicants and issue licenses. The government has until November 17 to give companies an answer and as of right now there has been no mention of a limit on the number of licenses being given. Once an operator has been approved, they then have 15 days to pay the fee of 30 million reais ($5.45 million) in order to continue doing business.

Playing by the rules: What are the guidelines going forward?

Once the official ordinances go into effect at the beginning of January, all authorized gambling operators must use the bet.br domain to distinguish themselves from unauthorized platforms operating on other domains, such as .com.

Credit cards will no longer be a valid form of payment; instead, only debit cards or Pix (the Brazilian instant payment system) will be accepted.

Operators will also be required to perform a robust identification process during onboarding. It is therefore of utmost importance to choose an identity verification provider that can meet the very specific challenges of the Brazilian market:

Be able to process a wide range of different Brazilian identity documents.
Perform KYC and gather customer information, such as full name, date of birth and valid identification documents.
Verify citizens’ CPF numbers and ensure that they do not appear on any of the various government-mandated restriction lists.
Perform facial biometric verification, which is now mandatory for registration, with regular facial re-verifications also required.

Operators must also monitor transactions for suspicious activity to prevent money laundering, fraud and underage gambling. The goal of these KYC measures is to enhance consumer protection, promote responsible gambling and ensure compliance with Brazil’s broader efforts to create a transparent and secure gambling environment. Non-compliance with these requirements can result in severe penalties, including the loss of the operator’s license.
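As a rough sketch of how these onboarding gates could fit together, consider the TypeScript below. The data shapes and check functions are invented for illustration; they do not represent IDnow’s API or the ordinances’ exact technical requirements.

// Illustrative applicant record; field names are hypothetical.
interface Applicant {
  fullName: string;
  dateOfBirth: string;       // ISO 8601 date
  cpf: string;               // Brazilian taxpayer number
  documentImage: Uint8Array;
  selfie: Uint8Array;
}

// Stubs standing in for real document, list-screening, and biometric services.
async function documentIsValid(doc: Uint8Array): Promise<boolean> { return true; }
async function cpfIsRestricted(cpf: string): Promise<boolean> { return false; }
async function faceMatchesDocument(doc: Uint8Array, selfie: Uint8Array): Promise<boolean> { return true; }

// Each gate mirrors one of the requirements listed above.
async function onboard(a: Applicant): Promise<"approved" | "rejected"> {
  if (!(await documentIsValid(a.documentImage))) return "rejected"; // document check
  if (await cpfIsRestricted(a.cpf)) return "rejected";              // restriction lists
  if (!(await faceMatchesDocument(a.documentImage, a.selfie))) return "rejected"; // biometric match
  return "approved"; // periodic facial re-verification would be scheduled separately
}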

Operators will therefore need to ensure their platform can onboard customers quickly, safely and securely. A robust KYC process guarantees that customers are who they say they are, while preventing fraud and protecting the business and the customer at the same time.

Ronaldo Kos, LATAM Gaming at IDnow
Play on with IDnow.

At IDnow, we offer advanced Know Your Customer (KYC) services to assist with existing and upcoming regulations, in Brazil and beyond.

Our automated and secure identity verification solutions enable operators to quickly and accurately verify player identities, ensuring compliance with Brazil’s stringent KYC requirements.

We support real-time document verification, biometric checks and fraud detection, which helps operators prevent money laundering and underage gambling while maintaining a seamless user experience.

Read more about our Brazil-ready identity verification services.

By

Kristen Walter
Jr. Content Marketing Manager
Connect with Kristen on LinkedIn


Ocean Protocol

DF108 Completes and DF109 Launches

Predictoor DF108 rewards available. DF109 runs Sept 26 — Oct 3, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 108 (DF108) has completed.

DF109 is live today, Sept 26. It concludes on October 3. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF109 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF109

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.
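As a toy illustration of the even-distribution idea, the TypeScript below spreads the weekly OCEAN budget across purchase rounds and splits ROSE pro rata among claimers. The one-round-per-day count and the pro-rata rule are assumptions for illustration, not Ocean’s published formula.

// DF109 Predictoor budget from this post; one buy round per day is an assumption.
const WEEKLY_OCEAN_BUDGET = 37_500;
const ROUNDS_PER_WEEK = 7;

function oceanPerRound(total: number, rounds: number): number {
  return total / rounds; // even distribution across the week
}

// Hypothetical end-of-week ROSE split among active claimers, pro rata by stake.
function roseShares(totalRose: number, stakes: number[]): number[] {
  const totalStake = stakes.reduce((sum, s) => sum + s, 0);
  return stakes.map((s) => (totalRose * s) / totalStake);
}

console.log(oceanPerRound(WEEKLY_OCEAN_BUDGET, ROUNDS_PER_WEEK)); // ~5357 OCEAN per round
console.log(roseShares(20_000, [100, 300, 600])); // [2000, 6000, 12000]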

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF108 Completes and DF109 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


DHIWay

Digital Public Infrastructure is the linchpin of trustworthy data exchange


The honourable Finance Minister presented a much-anticipated budget for FY 24-25, and those who are familiar with Digital Public Infrastructure (DPI) will find some topics interesting. The budget mentioned DPI to support farmers by improving access to digital tools. An extension to that is data digitization and reusability through the Unique Land Parcel Identification Number (ULPIN) or “Bhu-Aadhaar” and the digitization of land records in urban areas with contextual GIS data.

Bringing DPI into a discussion around data digitisation, enabling reusable data exchange and creating digital identifiers opens the IT architecture to the concept of “trust layers”. Today, as many citizen services are transitioning from traditional Web 2.0 models to Web 3.0 design patterns, trustworthy data exchange is critical to reducing transaction friction. The “trust” component of data exchange is enabled through distributed ledger technologies (DLTs) such as blockchain. As new service deployments bring about digital identifiers, digital cards, and records, and, more importantly, there is a need to have authentic data sources to query, it is vital to view DPIs from the perspective of Open Trust Infrastructure.

Public instances of a blockchain, such as CORD, form the foundation of the Open Trust Infrastructure. Combining open standards, networks, and protocols with open innovation makes the available building blocks critical for designing applications and services around them. Trustworthy data exchange is enabled through the issuance and acceptance networks of verifiable credentials (VCs) and verifiable data streams, allowing purpose-specific data sharing and verification with a notice/consent design embedded within.

The trust layer of the blockchain will bring about open infrastructure such as data fingerprinting registries, credentialing registries, and key management services. Data registries on the blockchain allow regulatory authorities to openly govern data while ensuring it is accessible to services with high fidelity and quality. Each of the nine priority areas highlighted in the Budget 2024 speech can have a multiplier impact when DPIs with data registries are enabled for their services and innovative solutions are incubated around such infrastructure.

Verifiable Credentials are increasingly becoming mainstream – sometimes without the end consumer realizing that the records they manage are secure, tamper-resistant VCs. This is a positive sign that the end-consumer experience can be transformed from a paper-based system of low-trust and low-fidelity to a high-trust and high-fidelity digitally secure method. With this, it is not surprising that organizations and educational institutions are today using credentialing platforms to issue VCs for learning and educational records (LERs), Skilling and Knowledge Records (SKRs) and Workplace Credentials (WPCs). Secure, verifiable credentialing is a natural outcome of an Open Trust Infrastructure.
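For readers unfamiliar with the format, a W3C Verifiable Credential is essentially a signed JSON document. The TypeScript object below sketches the general shape for a skilling record; the DIDs, credential type, and subject fields are placeholders invented for illustration.

// Sketch of a W3C-style Verifiable Credential for a skilling record.
const skillingCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "SkillingRecordCredential"], // illustrative type
  issuer: "did:example:training-institute",                   // placeholder issuer DID
  issuanceDate: "2024-07-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:learner-123", // placeholder holder DID
    skill: "Data Entry Operator",
    completedOn: "2024-06-15",
  },
  // In a real credential, the proof section cryptographically binds
  // the content to the issuer's key so any verifier can check it offline.
  proof: { type: "Ed25519Signature2020" /* ...signature fields... */ },
};

It is this issuer signature, rather than the paper trail, that makes the record tamper-resistant and verifiable at high fidelity.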

With DPIs and Open Trust Infrastructure, it is crucial to view access to data services from the perspective of digital trust ecosystems. This, in turn, ensures better governance and operational management of such infrastructure through consortium-like approaches, which provide participants with the necessary incentives to maintain and improve these digital rails.

As ISVs and SIs come together to implement the critical parts of the digital infrastructure required to deliver records such as skilling and knowledge credentials or Kisan Credit Cards, the DPI design model makes it possible to use the CORD blockchain to rapidly prototype and deliver production-ready software capable of being used at nation scale. The CORD blockchain also allows for a decentralized and federated approach to data governance, data management and data pipelines, which addresses the problem of legacy data silos and interoperability.

CORD Blockchain sandboxes allow ISVs to design and deploy applications which follow the guidelines of Digital Public Goods (DPGs) on infrastructure that is aligned with the principles of DPI. The growing number of use cases, an uptick in the number of tenders and RFPs that mention blockchain as the preferred underlying infrastructure, and a better appreciation of the concept of reusable digital identifiers have contributed to the rapid adoption of Web 3.0 models. Together with empowering the data principals with more fine-grained control over the data, such an approach also brings in newer ideas in economic models around sustainable and scalable infrastructure.

The post Digital Public Infrastructure is the linchpin of trustworthy data exchange appeared first on Dhiway.


Okta

Propel Your SaaS Apps Into the Future at Oktane


We’ve been discussing and reflecting on the Future of Identity over the last couple of months. It’s apparent to us that Identity is rapidly growing in its complexity. The surface area that our customers need to protect is growing, like a sunrise revealing a hidden terrain in the morning twilight. We realize that in a short time, the growing demands of customers will start to influence the roadmaps of SaaS companies and their developers to keep pace with protecting their customers and differentiating their value in their respective markets. The timing of this discussion couldn’t be better! We would love to meet you, hear about your vision and challenges, and nerd out on Identity and Software Development. Join us at Caesars Forum in Las Vegas, NV, on October 15-17, 2024, for Oktane, the biggest identity event of the year, and learn how to propel your SaaS apps into the future by connecting with Okta!

If you are currently not attending Oktane (but would like to) and you build SaaS apps, please reach out to us at wic-dev-advocacy[at]okta.com to request information regarding how you can obtain a pass. A limited number of passes are available, so reach out soon!

We’ve planned fantastic events to help you take your SaaS apps to the next level by leveraging Okta’s identity and user lifecycle platforms. Find us and let’s chat at these activities:

Breakout sessions

Building a SaaS Application with CIC
Wednesday Oct 16, 3:45 PM

Empower your Ecosystem with Okta
Thursday Oct 17, 12:45 PM

B2B SaaS App of the Future
Thursday Oct 17, 2:30 PM

Stop by the Oktane Dev Hub

Take a drive through the Oktane Expo Hall, and you’ll find the SaaS B2B experience in the Dev Hub at the intersection of SaaS Way and Integration Drive. Here, you’ll discover the ways Okta can help you create secure B2B SaaS applications. You’ll learn about identity and user lifecycle management best practices. Then build these standards-compliant solutions into your apps so you can submit them to the Okta Integration Network (OIN) and watch your customer base grow!

Check out the Oktane Hands-on Labs for interactive learning opportunities

Roll up your sleeves and get your coding on. This is your chance to get techy and build code using Okta solutions. Find us at labs where you can pick from options such as:

Scaling Okta App Management by Importing Data from PowerShell into Terraform
Streamline and scale your Okta app management by using PowerShell to export configurations and use Terraform to automate environment transitions.

Universal Logout: Instantly Sign a User Out across All Your Apps
Learn how to lock down all your apps and protect your customers completely at the first sign of trouble with one API!

Identify Inactive Okta Users with Okta Workflows
Determine if you have unused accounts that might have been missed by some manual deprovisioning process.

Labs are first-come-first-serve and have limited capacity. If there’s a lab you’re interested in, be sure to show up on time!

B2B SaaS builders happy hour

Join us at a special event for those who build apps that connect with Okta for their customer base! This happy hour is where all the fun and connections happen. This is a private event exclusively for techy folk who build multi-tenant B2B SaaS applications and want to offer Okta Identity Provider connections as an option for their customers. Does this describe you and are you interested in attending this event? Please contact us at wic-dev-advocacy[at]okta.com to be added to our guest list. You must be an attendee at Oktane to attend this happy hour.

Okta Workflows community meetup

Join the Okta Workflows community meetup during Oktane 2024 in Las Vegas. Meet Workflows community members, colleagues, and friends over drinks and delicious appetizers.

Find resources, solutions, and networking opportunities at Oktane

We’re excited to connect with you and learn about your application’s needs! Please find us at Oktane, and feel free to comment if you have any questions or requests in the meantime.

Remember to follow us on Twitter and subscribe to our YouTube channel for exciting content.


DHIWay

My Journey with Sunbird RC: Revolutionizing Registries and Credentials


My journey with Sunbird began in 2017 when we at the EkStep Foundation joined forces with the Ministry of Education (formerly MHRD) on the National Teacher Portal initiative. This initiative soon evolved into DIKSHA (Digital Infrastructure for Knowledge Sharing), a national platform dedicated to enhancing school education in India.

The Vision Behind Sunbird

Sunbird was conceived as a collection of modular, configurable, and extendable building blocks designed to “share the ability to solve.” Think of it as a set of LEGO blocks or puzzle pieces that can be assembled in myriad combinations to foster innovative solutions within the educational ecosystem.

From the outset, we identified several essential building blocks that would form the backbone of any large-scale transformation. Among these were telemetry, knowledge graphs, data platforms, and crucially, registries and credentials. Our initial work on registries and credentials began as OpenSABER (Open Software Architecture for Building Electronic Registries), utilizing openbadges as a foundational element.


Empowering Individuals Through Data Ownership

At its core, Sunbird RC enables an attested source of information while granting individuals ownership and control over their data. This credentialing process—referred to as badging in the initial days—ensures that personal data is easily shareable and verifiable.

Registries serve as the seeds of trust in any decentralized ecosystem, while credentials empower individuals by giving them control over their data. This makes their information portable, verifiable, inclusive, trustworthy, and accessible to all.


Adapting to Change: The Impact of the Pandemic and Birth of Sunbird RC

The COVID-19 pandemic accelerated the development of digital solutions, prompting the eGov Foundation to create DIVOC for managing vaccination records and certifications. During this period, global standards around ‘verifiable credentials’ rapidly evolved. As a result, OpenSABER was rebranded as Sunbird RC (Registry and Credential) to give the project a fresh boost of enthusiasm.

Sunbird RC has emerged as a vital building block for establishing trusted registries and verifiable credentials across various domains. It powers other Digital Public Goods (DPGs) such as DIGIT, Inji, Sunbird Serve, and Sunbird ED. To date, it has facilitated billions of credentials and diverse registries across sectors like healthcare and education in India.

Sunbird RC offers microservices for credential issuance and management, enabling rapid deployment of electronic registries through configurable schemas. One standout feature is its ability to generate instantly verifiable credentials that can be accessed offline via printable QR codes.
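To illustrate what “configurable schemas” means in practice, here is a hypothetical schema-style object for a simple registry; it conveys the idea rather than Sunbird RC’s exact configuration format.

// Hypothetical registry schema in the spirit of Sunbird RC's configurable schemas.
const teacherRegistrySchema = {
  title: "Teacher",
  properties: {
    name: { type: "string" },
    school: { type: "string" },
    subject: { type: "string" },
  },
  required: ["name", "school"],
  // Fields to surface in the credential that gets rendered as a printable,
  // offline-verifiable QR code.
  credentialFields: ["name", "school", "subject"],
};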

Looking Ahead: Collaboration and Innovation

With the growing traction for credentialing systems globally across various use cases, Dhiway has joined Sunbird RC as a co-maintainer alongside the Centre for Open Societal Systems (COSS). Together, we aim to advance this project further by promoting adoption and enhancing innovations in digital identity, data registries, digital wallets, and credentialing systems.

Sunbird is an open-source collective seeded by the EkStep Foundation. The community has developed about 20 digital solutions—referred to as “building blocks”—that can be utilized individually or combined to create larger and more complex solutions.

I have had the pleasure of working with multiple community members from eGov Foundation, MOSIP, BMGF, some of the hyperscale cloud service providers such as AWS & Google, a few large scale multinational IT companies, a bunch of start-ups, government ministries, and many open-source communities.

I am thrilled to continue my tryst with Sunbird RC and grow it together with the community, leveraging Dhiway team’s extensive experience in building and maintaining open-source projects.

To learn more about Sunbird RC and join our vibrant community, visit https://rc.sunbird.org 

The post My Journey with Sunbird RC: Revolutionizing Registries and Credentials appeared first on Dhiway.


TBD

Preptember: Amping Up for Hacktoberfest 2024

TBD is participating in Hacktoberfest!

With October blazing through, we're greeted by pumpkin spices, the aroma of fall leaves drifting in the rain, and of course, the much-anticipated Hacktoberfest. Whether you're a seasoned contributor or new to open source, there's something for everyone.

🎉 We're Participating in Hacktoberfest 2024!

We have several projects with a variety of issues that we'd love your contributions for! For each issue that's merged, you'll earn points towards the TBD Hacktoberfest Leaderboard. Winners will receive exclusive TBD Hacktoberfest 2024 swag!

We're kicking off Hacktoberfest with two events:

September 27: tbdTV - Hacktoberfest
October 2: Show & Tell: TBD Hacktoberfest

Be sure to add them to your calendar.

📌 What is Hacktoberfest?

Hacktoberfest is a month-long (October) celebration of open source software. It's sponsored by DigitalOcean, GitHub, and other partners. Check out Hacktoberfest's official site for more details and to register. Registration is from September 23 - October 31.

📂 Dive into TBD's Participating Projects

We included a wide variety of projects and issues for Hacktoberfest 2024. Each of our participating repos has a Hacktoberfest Project Hub, which contains all issues you can pick up with the hacktoberfest label. For easy reference, repos with multiple projects will have multiple project hubs.

Explore our participating repos below and see where you can make an impact:

developer.tbd.website

Languages: MDX, JavaScript, CSS, Markdown
Description: Docusaurus instance powering the TBD Developer Website (this site).
Links: Hacktoberfest Project Hub | Contributing Guide

web5-js

Language: TypeScript
Description: The monorepo for the Web5 JS TypeScript implementation. It features libraries for building applications with decentralized identifiers (DIDs), verifiable credentials (VCs), and presentation exchange (PEX).
Links: Hacktoberfest Project Hub: Protocol Explorer | Hacktoberfest Project Hub: General | Contributing Guide

web5-rs

Language: Rust
Description: This monorepo houses the core components of the Web5 platform, containing the core Rust code with Kotlin bindings. It features libraries for building applications with decentralized identifiers (DIDs), verifiable credentials (VCs), and presentation exchange (PEX).
Links: Hacktoberfest Project Hub | Contributing Guide

dwn-sdk-js

Language: TypeScript
Description: Decentralized Web Node (DWN) reference implementation.
Links: Hacktoberfest Project Hub | Contributing Guide

DWA Starter

Language: JavaScript
Description: Decentralized Web App (DWA) starter collection.
Links: Hacktoberfest Project Hub: VanillaJS | Hacktoberfest Project Hub: Vue | Contributing Guide

DIDPay

Language: Dart
Description: Mobile app that provides a way for individuals to interact with PFIs via tbDEX.
Links: Hacktoberfest Project Hub | Contributing Guide

DID DHT

Language: Go
Description: The did:dht method and server implementation.
Links: Hacktoberfest Project Hub | Contributing Guide

DCX

Languages: TypeScript, JavaScript
Description: A Web5 protocol for Decentralized Credential Exchange.
Links: Hacktoberfest Project Hub | Contributing Guide

Goose Plugins

Language: Python
Description: Plugins for Goose, an AI developer agent that operates from your command line.
Links: Hacktoberfest Project Hub | Contributing Guide

Fllw, Aliased

Languages: TypeScript, JavaScript
Description: A reference app for building Decentralized Web Apps.
Links: Hacktoberfest Task: Fllw | Hacktoberfest Task: Aliased

Hot Tip

Not a coder? No worries! developer.tbd.website has tons of non-code related issues up for grabs.

📝 Guide to TBD x Hacktoberfest 2024

✅ Topic Check: Contribute to projects that have the hacktoberfest label. This ensures your PR counts towards the official Hacktoberfest prizes.

🏷️ Label Insights:

Start with an issue labeled hacktoberfest and comment ".take" to assign yourself the issue.
Once your PR is submitted and approved, it will be labeled hacktoberfest-accepted and you'll receive points on our leaderboard and credit towards the global Hacktoberfest 🎉
If your PR is marked with a spam or invalid label, re-evaluate your contribution to make it count.

🥇 Code and Conduct: Adhere to our code of conduct and ensure your PR aligns with the repository's goals.

🫶 Community Support: Engage with fellow contributors on our Discord for tips for success from participants!

🆘 Seek Help: If in doubt, don't stress! Connect with the maintainers by commenting on the issue or chat with them directly in the #🎃┃hacktoberfest channel on Discord.

🎁 Leaderboard, Prizes and Excitement

Be among the top 10 with the most points to snag custom swag with this year's exclusive TBD x Hacktoberfest 2024 design! To earn your place on the leaderboard, we have created a points system, explained below. As your issues are merged, you will automatically be granted points.

💯 Point System

Weight | Points Awarded | Description
🐭 Small | 5 points | For smaller issues that take limited time to complete and/or don't require any product knowledge.
🐰 Medium | 10 points | For average issues that take additional time to complete and/or require some product knowledge.
🐂 Large | 15 points | For meaty issues that take a significant amount of time to complete and/or possibly require deep product knowledge.

🏆 Prizes

The top 10 contributors with the most points will be awarded TBD x Hacktoberfest 2024 swag from our TBD shop. The top 3 contributors in our top 10 will be awarded very limited customized TBD x Hacktoberfest 2024 swag with your github username on it. Stay tuned to our Discord for the reveal!

Keep an eye on your progress via our Leaderboard.

🎙️ Livestreams & Office Hours

Dive into our jam-packed Hacktoberfest schedule! Whether you're just here for fun or are focused on learning everything you can, we've got you covered:

September 27th, tbdTV Hacktoberfest Kickoff - Tune in for a special stream with Rizel Scarlett and Tania Chakraborty to learn how to boost your career through open source contributions.

October 2nd, Show & Tell: Hacktoberfest 2024 - Explore all our projects, what types of contributions you can make and more with Tania Chakraborty and Rizel Scarlett.

Every Tuesday, Community Office Hours - Join us every Tuesday at 1p ET for the month of October, where we will go over PR reviews, live Q&A, and more. This event occurs on Discord.

Live Events Calendar - Keep tabs on our Discord or developer.tbd.website for our future events & sneak peeks - we're always cooking up something new!

📚 Resources for First-Time Contributors

📖 How to Contribute on GitHub
🛠 Git Cheatsheet
🔍 Projects Participating in Hacktoberfest

Happy hacking and cheers to Hacktoberfest 2024! 🎉

Wednesday, 25. September 2024

Indicio

Blockchain Digital Badges – Building Blocks for Digital Learning Ecosystems

The post Blockchain Digital Badges – Building Blocks for Digital Learning Ecosystems appeared first on Indicio.

Spruce Systems

The Importance of Protecting Digital ID Users from “Phone Home” Surveillance

Keeping Digital Identity Safe with Private Information Retrieval.

Digital identity systems theoretically offer substantial improvements over the current identity status quo, including superior fraud prevention and enhanced user privacy. As the industry comes together around standards and system designs, we at SpruceID firmly believe user privacy must remain front and center.

One of the more challenging aspects of building digital identity is protecting users from surveillance. It has always been possible to track individuals through their movements and activities, in the real world and online. It is now quite common for digital data to be used to build profiles of web users, and a poorly designed digital identity system risks replicating that pattern. This surveillance could be carried out by any number of legitimate (or not) entities, including commercial “data harvesters” or ID issuers themselves, such as the Department of Motor Vehicles.

Below, we outline one effort to combat the risk of surveillance through digital identity systems, using a process known as Private Information Retrieval, or PIR. By using cryptography to obscure remote data queries, PIR can reduce identity-based surveillance and enhance user trust in digital identity.

When A Question Reveals Too Much

One strength of existing physical IDs, such as driver’s licenses, is their natural protection against surveillance. In most cases, someone checking the ID looks at it and verifies its authenticity and resemblance to the holder, and that’s the extent of information capture. There’s no call out to a separate system to verify the legitimacy of that ID, no records kept that you may be showing it quite frequently to a clerk at your local store, and no concerns raised about whether you’re buying a pint or a pint of Ben & Jerry’s. 

This sort of protection is more challenging in a digital system when there is an inherent tendency in technology to generate a robust event log for every transaction. A digital ID system with minimal privacy controls might query a central server for verification whenever your ID is checked and - accidentally or on purpose - create a detailed, real-time feed of your online and real-world activities. That data could have great value to the issuing authority and numerous bad actors, who will no doubt attempt to access that treasure trove of personal information. 

The implications of abuse of a data set containing granular, verified behavior of individuals are sobering. Governments could use it to surveil activists and journalists. Abuse and stalking victims could be tracked by their abusers. Even challengers in democratic elections could find themselves targeted by unethical incumbents abusing the system from within. One worrying example may have unfolded recently in China, when a local government allegedly used data from a COVID app to lock down protestors worried about frozen bank assets.

This is what’s known as a “phone home” problem in cybersecurity. Current standards for digital ID reduce this risk by storing an issuer’s digital signature on a mobile device, where it can be verified locally rather than needing to query a server. This works much the same way as a hologram on a physical driver’s license, allowing it to be verified locally without generating a digital trail.
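Here is a minimal sketch of that local-verification idea using the standard Web Crypto API; the key provisioning and payload format are simplified assumptions.

// Verify the issuer's signature over the credential bytes entirely on-device:
// no server round trip, so no "phone home" record is created.
async function verifyCredentialLocally(
  issuerPublicKey: CryptoKey,   // provisioned alongside the credential
  credentialBytes: ArrayBuffer, // the signed payload
  signature: ArrayBuffer
): Promise<boolean> {
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" }, // assumes an ECDSA P-256 issuer key
    issuerPublicKey,
    signature,
    credentialBytes
  );
}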

But there are still circumstances where remote identity queries are necessary. This creates a design problem for a privacy-preserving digital ID system: how do you query a database without the database being able to record the query?

The good news is that thanks to innovations in cryptography, it’s very feasible to ensure that digital identity systems don’t risk exposing users to surveillance, even when a verifier has to “phone home.”

Building Private Information Retrieval

A privacy-preserving database query needs to mask many kinds of information: the identity of the querier, the identity of the target of the query, what data is being checked, and the location of the query, for a start. At the same time, the data still needs to be restricted to a specific credential holder. 

This is possible thanks to a process called “Private Information Retrieval,” or PIR. The nuances of PIR can be illustrated by a few hypothetical approaches to obscuring data retrieval. For instance, if a database query downloads an entire database, the server won’t know which specific record the query was after. Another brute-force approach involves keeping many separate copies of a database that can be queried at random, making it hard for any one copy’s controller to aggregate a full picture of any set of queries.

These aren’t very practical solutions, though. We believe there’s much more promise in a relatively recent addition to the PIR toolbox: zero-knowledge proofs, or ZKPs. Using cryptographic encoding, ZKPs transform data, such as an ID holder’s identity, so the data can be confirmed without being revealed.

ZKPs can serve several roles in protecting user privacy during a digital ID database query. First, a package of ZKP-protected data can affirm that a verifier, such as a law enforcement officer, has a right to query an identity database without revealing the verifier’s specific identity. The verifier would then submit the credential that must be verified, again protected by ZKP encoding. This encoded credential could then be checked for validity without revealing the credential holder’s private information. 

This would make it far less possible to keep a record of useful information that could be used for surveillance—“someone from a trusted entity queried some sort of information from some database at some time”—which doesn’t really allow for Sherlock-level sleuthing. It’s this inability to even generate information that could be aggregated for exploitation that makes ZKP so enticing. 
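Schematically, the flow described above might look like the TypeScript sketch below. The proof functions and registry endpoint are placeholders standing in for a real ZKP library and identity database, not working cryptography.

interface Proof { bytes: Uint8Array; }

// Placeholder "provers" standing in for a real zero-knowledge proof library.
async function proveAuthorized(verifierCredential: Uint8Array): Promise<Proof> {
  return { bytes: verifierCredential }; // stub only; a real ZKP reveals nothing
}
async function proveCredentialValid(subjectCredential: Uint8Array): Promise<Proof> {
  return { bytes: subjectCredential }; // stub only
}

// Placeholder registry endpoint that checks proofs without learning identities.
const registry = {
  async check(authority: Proof, validity: Proof): Promise<boolean> {
    return true; // stub: would verify both proofs cryptographically
  },
};

// The server learns only "an authorized party checked a valid credential",
// not who asked, about whom, or from where.
async function queryWithoutRevealing(
  verifierCredential: Uint8Array,
  subjectCredential: Uint8Array
): Promise<boolean> {
  const authorityProof = await proveAuthorized(verifierCredential);
  const validityProof = await proveCredentialValid(subjectCredential);
  return registry.check(authorityProof, validityProof);
}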

Communicating the Intent of Privacy

71% of Americans now express concern about government use of data. At SpruceID, we expect that holding and demonstrating strong privacy principles will be key to unlocking the acceptance and broad adoption of digital identities. Concepts like Private Information Retrieval should be standard for any digital identity system, and ZKP technology is a promising tool in that effort.

Industry practitioners should take lessons from the past 50 years of software development and build personal security and privacy into systems from the start. That’s not to say this work will be simple and seamless. Of course, it will undeniably be challenging not only to design and implement truly privacy-preserving digital identity systems but also to convince a skeptical public. Both will be necessary, though, to foster broad user adoption and make the full promise of digital identity a reality.

Visit our website to learn more about SpruceID's stance on privacy and how we protect digital ID users from phone home surveillance.

Learn More

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


1Kosmos BlockID

Vlog: How 1Kosmos Can Be An External Authentication Method When Using Microsoft Entra ID?


Discover how 1Kosmos enhances Microsoft Entra ID with seamless identity-based authentication and passwordless access. Learn about new external authentication methods and how they empower organizations to protect critical assets, implement Conditional Access policies, and offer users more flexibility and security.

Robert MacDonald:

Hi everybody. Welcome to our blog. My name’s Rob MacDonald. I’m the VP of product marketing here at 1Kosmos, and I’m joined by Vik today.

Vik, how you doing?

Vikram Subramanian:

I am doing great, yeah.

Robert MacDonald:

Awesome.

Vikram Subramanian:

And just for everyone, Vikram Subramanian. I run solutions for 1Kosmos.

Robert MacDonald:

Awesome. And you do a great job at it, Vik, by the way. Appreciate having you.

All right, Vik, listen. Today, I wanted to have a short little vlog with you about Microsoft. Microsoft has released a new external authentication method capability in its Entra ID platform. The feature will allow more customers to expand their use of 1Kosmos’s identity-based authentication and passwordless access capabilities into far more Microsoft environments while maintaining all of the Conditional Access policies that they’ve built. 1Kosmos as an external authentication method will allow organizations to seamlessly protect Microsoft resources while also still protecting those platforms that fall outside of the Microsoft coverage.

All that to say, Vik, there are many use cases we can help fulfill to help improve an Entra ID investment. 1Kosmos as an external authentication method, while that is a new feature to Entra ID, what is it?

Vikram Subramanian:

Good question. Many organizations obviously have invested in Entra, and I’m glad that both of us are actually getting that right. It’s not Ontra. It’s Entra.

Robert MacDonald:

It’s not Ontra. It’s Entra.

Vikram Subramanian:

The main use case was that, hey, given that organizations are already invested in Entra, people are already authenticating in Entra and they’re probably using authenticators that are not necessarily complying with the requirements that the enterprise has, or not necessarily delivering the experience that the enterprise wants. It is a pretty big change if we tell people to actually move all of their authentication and utilize 1Kosmos as an IDP.

A great use case over here, and one we’ve always been asked about by clients, is: can we use 1Kosmos as an MFA within our Entra ecosystem? And now, with external authentication methods, we can. And what this provides is the ability for the enterprise to go ahead and introduce 1Kosmos to their end users and slowly start migrating them towards utilizing passwordless in its entirety.

Robert MacDonald:

Interesting, okay. That’s a lead into my next question, which is, with EAM, or external authentication methods, within Entra ID, how can 1Kosmos help organizations within that kind of use case?

Vikram Subramanian:

A great use case is where organizations want to protect their crown jewels, so privileged assets, restricted assets, or restricted applications, restricted transactions. Anytime anything requires MFA, you can put 1Kosmos as the authenticator of choice within your Conditional Access policies. Earlier, you were not able to do this. It’s a great feature introduced by Microsoft, and we have immediately jumped and integrated with them utilizing that feature, which means now, within the Conditional Access policy, you can select 1Kosmos as the authenticator for when certain conditions are met. And you can also specify what kind of authentication you want the user to do. Do you want to depend on device biometrics or the superior Live ID that we offer?
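As a rough, hypothetical sketch of what "selecting the authenticator within a Conditional Access policy" can look like operationally, the snippet below creates an MFA-gating policy through the Microsoft Graph API. The group ID, application ID, and access token are placeholders, and pinning a specific external method such as 1Kosmos is configured through the tenant's authentication method settings rather than in this payload.

```python
import requests

# Hypothetical sketch: a Conditional Access policy requiring MFA for a
# restricted app, created via Microsoft Graph. IDs and token are placeholders.
GRAPH_URL = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"

policy = {
    "displayName": "Require MFA for crown-jewel apps (sketch)",
    "state": "enabledForReportingButNotEnforced",  # report-only while piloting
    "conditions": {
        "users": {"includeGroups": ["<privileged-users-group-id>"]},
        "applications": {"includeApplications": ["<restricted-app-id>"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    GRAPH_URL,
    json=policy,
    headers={"Authorization": "Bearer <access-token>"},
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```

Running the policy in report-only mode first is a common way to validate the conditions before enforcing them on users.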

Robert MacDonald:

Fair enough. With this change to the way in which Microsoft’s offering their Entra ID solution, why is that important not only to organizations, but maybe the industry at large?

Vikram Subramanian:

See, now, this is the introduction of choice. Earlier, within the Microsoft ecosystem, the choices were very limited in terms of which authenticators you could use for doing MFA. And now, with the open ecosystem that has been introduced, 1Kosmos can also be utilized by organizations. Many of our clients who are already investing in Entra, or have invested in Entra, could not leverage 1Kosmos without making a huge organizational change and were stuck in their implementation. Now, this frees them up. They can utilize 1Kosmos as an MFA solution or as a passwordless solution. And it gives them choice.

And they can also utilize Conditional Access policies. That is very important. Why are Conditional Access policies important? Because everyone has them. Everyone is going to be using them. And now, you can also utilize that for Live ID.

Robert MacDonald:

Awesome. That’s amazing.

Vik, I appreciate you swinging by today and going through this quick use case with us on our vlog. I look forward to talking to you on our next one.

Vikram Subramanian:

Absolutely.

The post Vlog: How 1Kosmos Can Be An External Authentication Method When Using Microsoft Entra ID? appeared first on 1Kosmos.


IDnow

Paperless signing: Discovering the advantages of digital signatures.

IDnow ebook reveals how the latest digital signature solutions can help unlock valuable business opportunities.

From symbols and pictographs to ink and pen, signatures have been around for thousands of years, dating all the way back to 3000 B.C. Interestingly, the idea of digital signatures can be traced to the Wild West era, when businesses used the dot-and-dash communications of Morse code and telegrams to sign contracts.

Fast forward to the 20th century, when, in 1976, the first concept of a digital signature was introduced by cryptographers Whitfield Diffie and Martin Hellman. Today, compliant and fully digital signatures are integral to everyday business operations, securely facilitating contract signings worldwide, instantly accessible from any location, at any time.

As the world shifts from physical to electronic, digital signatures will be essential in ensuring trust and authenticity in transactions. Digital signatures can streamline the process of signing contracts and enhance a company’s ability to verify, comply and protect digital identities, fortifying the trust needed to thrive in an increasingly digital world.
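For readers who want to see the underlying mechanics, here is a minimal sketch of signing and verifying a document with Ed25519 via the Python cryptography library. It illustrates the general authenticity and integrity principle, not any particular IDnow product flow.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The signer holds a private key; anyone with the public key can verify.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

contract = b"Service agreement v1: payment due within 30 days."
signature = private_key.sign(contract)

# Verification succeeds only for the exact bytes that were signed.
public_key.verify(signature, contract)  # no exception: authentic and intact

try:
    public_key.verify(signature, contract + b" (amended)")
except InvalidSignature:
    print("Tampering detected: signature does not match the document.")
```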

Click below to check out our latest ebook, ‘Expert guide to digital signatures.’

Expert guide to digital signatures

What are digital signatures and the history behind them? Download to discover:

- The different types of digital signatures
- Benefits of implementing a digital signature solution
- How IDnow can help unlock valuable business opportunities

Navigating the different types of digital signatures.

Much like a traditional wet ink signature, a digital signature serves as legal evidence when concluding a transaction. However, the advantages of digital signatures extend far beyond their physical counterparts. They are faster, entirely digital, and can be signed remotely from anywhere at any time, eliminating the need for signers to visit a physical location or branch.

This convenience not only streamlines the process but also enhances efficiency, making it easier and quicker to finalize important documents without the logistical challenges associated with physical signatures. Different types of digital signatures include:

- Simple Electronic Signatures (SES) can be as straightforward as typing a name or clicking an “I agree” button. They are versatile and accessible, making them ideal for everyday transactions. However, unlike Advanced Electronic Signatures and Qualified Electronic Signatures, an SES lacks strong authentication and data integrity measures. It is best suited for low-risk, informal agreements and may not be appropriate for high-value or legally sensitive transactions.
- Advanced Electronic Signatures (AES) uniquely identify and link the signer to the signature, ensuring it can be attributed to them. They meet legal requirements for court admissibility and comply with regulations like the EU’s eIDAS. While not as robust as a QES, an AES still offers significant legal weight and is recognized as a secure and reliable electronic signature in many jurisdictions.
- Qualified Electronic Signatures (QES) are highly secure digital signatures backed by a qualified certificate from a trusted Certificate Authority. This certificate meets stringent regulatory standards and links the signature to the signer’s verified identity, ensuring the highest level of legal recognition and security.

Embracing digital signatures.

The use of digital signatures has risen by 50% since the COVID pandemic, according to airSlate, with 69% of respondents continuing to use digital signatures due to their increased convenience and security.

This digital signing boom shows no signs of slowing, as the global digital economy continues to expand, with people increasingly living, working and transacting across borders, industries and use cases in this remote, digital landscape. Digital signatures, which combine remote, 24/7 convenience without compromising on security, are purpose-built for the digital world.

The increasing importance of digital signatures was put into the spotlight in July 2016, when the European Union (EU) issued the eIDAS regulation (electronic Identification, Authentication, and Trust Services), which increased the significance of electronic signatures drastically. Digital signatures are therefore widely used in the EU for various purposes, primarily to ensure the authenticity, integrity and non-repudiation of electronic documents and transactions.

For example, digital signatures are necessary and extremely important in the following situations:

- Legal contracts and agreements: Digital signatures are used to sign legally binding contracts and agreements, including sales contracts, employment agreements and service contracts. They provide assurance that the signer has accepted the terms of the document.
- Financial transactions: In the EU, digital signatures play a crucial role in financial transactions, including online banking, electronic fund transfers and digital payments. They help verify the identity of the parties involved and ensure the security of the transaction.
- Regulatory compliance: Digital signatures are often required to comply with various EU regulations and directives, such as the eIDAS regulation, which establishes a legal framework for electronic signatures, seals and time stamps.

The importance of the eIDAS Regulation.

The eIDAS Regulation (EU No 910/2014) stands as a cornerstone in Europe’s digital landscape, harmonizing rules for electronic identification and trust services across the European Single Market. It sets stringent standards for electronic signatures, notably QES, ensuring they carry equivalent legal weight to traditional handwritten signatures.

Crucially, eIDAS mandates mutual recognition of electronic identification methods between Member States, facilitating seamless cross-border transactions. This regulatory framework not only enhances security and trust in electronic communications but also promotes the digital economy’s growth by enabling secure and legally binding electronic transactions throughout the EU. This includes when an individual wants to sign a document with a QES and needs to be identified and verified for security purposes such as the following:

- Identity verification: The identity of individuals must be verified before issuing a qualified certificate for electronic signatures. This can be done using automated methods but must meet high security and reliability standards.
- Remote identification: Automated processes for remote identity verification are permissible under eIDAS, but they must ensure the same level of assurance as physical presence verification. Techniques may include video identification, use of eID cards or other secure methods, depending on the specific EU country and industry.

Together, these requirements guarantee that a high level of security and reliability is met across all types of document signing.

Unlocking the benefits of digital signatures – from security to sustainability.

From removing the friction and cost associated with manual processes, to preventing fraud and fueling growth, implementing digital signatures delivers a myriad of benefits, enabling businesses to:

- Improve security: Digital signatures use cryptographic techniques to ensure the authenticity, integrity and non-repudiation of signed documents, making them highly secure and resistant to tampering or forgery. Over 70% of users report fewer security and compliance incidents.
- Fight fraud: In recent years, the global volume of digital fraud attempts has increased by 80%, according to TransUnion. By using digital signatures, the identity of the sender is verified through a unique digital certificate that links the signature to the sender’s identity, making it difficult for fraudsters to impersonate or steal someone’s identity.
- Enhance user experience: With digital signatures, customer satisfaction is increased for more than 70% of users as the friction and frustration experienced in physical signing is removed, giving an enhanced, more streamlined user experience.
- Establish and build trust: Organizations that have implemented digital signatures report a 500% increase in customer loyalty. By ensuring the authenticity, integrity and non-repudiation of digital documents and transactions, digital signatures reassure users that their communications and transactions are protected from tampering and fraud.
- Cut costs: Without the need for printing, mailing and storing paper documents, digital signatures reduce hard costs by an average of 56%, creating a more efficient and cost-effective process.
- Go green: By reducing paper usage and transportation associated with physical document signing, digital signatures help reduce the environmental impact of traditional paper-based processes and could save up to 2.5 billion trees in less than 20 years.
- Achieve compliance: With nearly €1.87 trillion of global GDP tainted by money laundering each year, it is imperative for companies to meet all regulatory standards. Digital signatures, recognized as legally equivalent to handwritten signatures in many jurisdictions—including the EU’s eIDAS regulation and the US ESIGN Act—help businesses avoid fines and meet compliance requirements efficiently.
- Boost conversions: With users signing 79% of agreements within 24 hours, digital signatures enable automated signing workflows, allowing documents to be routed, reviewed and signed electronically – streamlining business processes, reducing bottlenecks and accelerating decision-making.
- Increase accountability: Digital signatures often include built-in audit trail capabilities, recording information such as the identity of the signer, the time and date of signing and any changes made to the document after signing. Because of this, companies witness an 80% reduction in signing errors, helping ensure accountability and transparency.
- Scale and drive growth: Digital signatures can be used globally, making it easier to conduct business across borders and collaborate with partners, suppliers and customers in different locations. It comes as no surprise that global e-sign transactions have risen from 89 million to 754 million in just over five years.

Why sign with IDnow?

At IDnow, we provide comprehensive signing solutions tailored to meet the diverse needs of any business operating in today’s global digital economy. Whether you need fully automated, video, eID, or in-person, IDnow delivers a versatile, secure and enhanced user experience, ensuring that your customers sign on the dotted line, every time.

This even includes our newest signing solution, InstantSign, which issues a QES using any previous AML-compliant identity verification in seconds.

If reverification is needed due to a user’s expired identity document, InstantSign works seamlessly with IDnow’s full range of identity verification solutions, keeping ident data up to date and providing the perfect, compliant solution for financial services organizations.

Any ident, from any vendor, anytime—truly one of a kind.

By

Kristen Walter
Jr. Content Marketing Manager
Connect with Kristen on LinkedIn


auth0

DevDay 2024 Recap: What's New In Auth0?

Developer Day 2024 is a wrap and Auth0 announced some cool stuff! Let's recap some of them in this blog post.

uquodo

UAE’s 2024-27 AML Strategy: How uqudo supports Fraud Prevention

The post UAE’s 2024-27 AML Strategy: How uqudo supports Fraud Prevention appeared first on uqudo.

Indicio

Three ways decentralized identity delivers transformational Digital Public Infrastructure

The post Three ways decentralized identity delivers transformational Digital Public Infrastructure appeared first on Indicio.

DPI — or Digital Public Infrastructure — is a new, white-hot topic in development. In 2023, at a G20 meeting in New Delhi, India, DPI was defined as:

“…a set of shared digital systems that are secure and interoperable, built on open technologies, to deliver equitable access to public and/or private services at a societal scale.”

The Bill and Melinda Gates Foundation describes DPI as:

“Like roads — a physical network essential for people to connect with each other and access a huge range of goods and services.”

In essence, how can governments invest in digital infrastructure that accelerates sustainable development — and what does that technology look like?

A trusted information superhighway?

If you’re thinking that maybe we’ve been here before, you’re half right. The internet and the web that came to sit on top of it utterly transformed how we interact, access and share information, and engage in economic and other activities. 

But the internet and web evolved without a crucial element — a verification layer for people and organizations, which has led to widespread identity fraud, privacy concerns,  and security breaches. So it’s not surprising to see digital identity systems being described as “foundational to DPI.”

Simply put, you can’t create a thriving, inclusive, innovative digital economy if you can’t trust that the person or organization you’re interacting with online is who they claim to be. Similarly, people won’t trust systems that can’t protect their personal data, not least their biometric data. 

This is why decentralized identity and Verifiable Credentials can be justifiably described as “game-changing” technology for DPI. Here, we explain three of the most important benefits.

1. Seamless authentication and data sharing

Decentralized identity means that people, organizations, or devices hold their own data in secure Verifiable Credentials in digital wallets on mobile devices. First, this eliminates the need for centrally storing personal or other valuable data in order for identity and verification to be managed. This removes a major security risk and provides data privacy. People can now consent to sharing their data. Companies and organizations are freed from onerous data privacy compliance.

Second, the source of a Verifiable Credential — the organization that issued it — is always knowable. The data in the Verifiable Credential is digitally signed, which means that if someone tries to alter it, it will be automatically detected. Verifiable Credential data can be shared by creating a link or QR code from simple software on a mobile device and verified with simple software on the web or on a mobile device.
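A toy sketch of that tamper-evidence property, assuming an Ed25519 issuer key and JSON canonicalized by sorted keys; real Verifiable Credentials use standardized proof suites, so the field names here are purely illustrative:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()

credential = {"issuer": "did:example:registry", "holder": "did:example:alice",
              "claim": {"businessLicense": "valid-2024"}}

def canonical(doc: dict) -> bytes:
    # Deterministic byte encoding so signer and verifier hash the same thing.
    return json.dumps(doc, sort_keys=True, separators=(",", ":")).encode()

proof = issuer_key.sign(canonical(credential))

# A verifier who trusts the issuer's public key can check integrity offline.
issuer_key.public_key().verify(proof, canonical(credential))  # passes

credential["claim"]["businessLicense"] = "valid-2099"  # tampered field
try:
    issuer_key.public_key().verify(proof, canonical(credential))
except InvalidSignature:
    print("Alteration detected automatically.")
```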

Add these all up and you have portable trust. You no longer have to engineer complex direct integrations to share data; and if you trust the source of the credential — say a bank, a business, or a government — you can act on the information instantly, because you know it hasn’t been altered. 

Decentralized Identity means information from anywhere can be verified anywhere. If this seems abstract, think of it in the context of, say, India, which has 63 million micro industries. With Verifiable Credentials, each of these economic actors can authenticate who they are interacting with and share data that can be trusted.

It gets better. By combining Verifiable Credentials with decentralized identifiers, people and organizations can authenticate and interact with each other directly, across secure communication channels (DIDComm). This communication protocol enables their mobile devices to take on the functionality of an API but with better security. Now they can integrate and use information in much more powerful ways.

Think of it as the capacity to create secure digital roads. These roads have no tolls. They are not owned by a platform, which means that the value created by digital interaction goes directly to those creating the value. These roads can be created from anyone to anyone, anywhere to anywhere.

2. Rescue biometric infrastructure from catastrophic failure

Biometrics are a powerful way to manage authentication: We bring our own, they don’t have to be remembered or constantly changed, and they’re fast to verify. For these reasons, they are being rapidly adopted everywhere. 

But though biometrics are supposed to replace passwords, they, unavoidably, have replicated one of the critical architectural weaknesses of password authentication: centralized storage. In order to verify a biometric template (essentially, a hash of a biometric), a verifying entity must store that template in a database.

Centralized storage has already led to catastrophic security failures, and the factor that makes biometrics so powerful — their uniqueness — turns their theft into an existential risk: You can reset a password; you can’t reset yourself, biometrically. 

Verifiable Credentials make these problems go away — giving you all the benefits of biometrics without the need for centralized storage. 

Here’s how it works: When a person’s biometric is first captured during identity assurance, the biometric template is also rendered and issued to them as a Verifiable Credential. This means that when a person presents for a biometric scan, they also present their biometric VC. The verifying entity compares the scan to the template in the credential.  That’s it — all the benefits of biometric authentication without the need for centralized storage.
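A simplified sketch of that flow, assuming the template travels inside a signed credential and that matching is a similarity comparison; real biometric matchers are vendor-specific, and the byte-level "template" and threshold below are toys:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()

enrolled_template = bytes([0b10110010, 0b01101100, 0b11100001])  # toy template
proof = issuer_key.sign(enrolled_template)  # issued inside the credential

def similarity(a: bytes, b: bytes) -> float:
    # Fraction of matching bits between two equal-length toy templates.
    matches = sum(8 - bin(x ^ y).count("1") for x, y in zip(a, b))
    return matches / (8 * len(a))

def verify_presentation(fresh_scan: bytes, template: bytes, sig: bytes) -> bool:
    issuer_key.public_key().verify(sig, template)   # credential untampered?
    return similarity(fresh_scan, template) >= 0.9  # then compare biometrics

live_scan = bytes([0b10110010, 0b01101100, 0b11100011])  # one bit differs
print(verify_presentation(live_scan, enrolled_template, proof))  # True
```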

“Bring your own biometrics” also provides a way to deal with the problems of biometric fakery, whether using silicone masks or generative-AI “deepfakes.” By requesting a credential containing a biometric template, verifiers have a way to double check the person is who they really are.

3. Decentralized governance

In creating Digital Public Infrastructure for people to authenticate and share data, it is impossible to know in advance every possible way they will use it to create value. 

Decentralized Ecosystem Governance is a simple way for the entity responsible for each use case to implement the governance rules it needs for its roads to work and be accountable to its users (e.g., which credential issuers can be trusted, what information flows are needed). 

This way of implementing governance (through machine-readable files that are propagated to each participant’s credential software) has the information-handling capacity to meet whatever variety the system throws at it, not least by virtue of localizing the governance decision-making and making it easy to implement changes based on feedback. 

For example, when we worked with the government of Aruba to implement Verifiable Credentials for Covid testing, the government needed to be able to rapidly change verification workflows based on new scientific information (e.g., test type, test validity times). Decentralized Ecosystem Governance was developed to meet this need. It has been developed into a full specification for governance by the Decentralized Identity Foundation (DIF).

As Digital Public Infrastructure, decentralized identity combined with decentralized governance gives people and entities the control they need to create frictionless ecosystems, the ability to rapidly respond to feedback, and clear accountability.

Lightweight and resilient

All these solutions can be implemented in a matter of weeks and without the eye-watering costs normally associated with infrastructure projects. That’s because Verifiable Credentials can work with existing systems rather than require replacing them. And, because they are based on interoperable standards and open-source code, they can unify disparate systems.

This effectively makes decentralized identity a universal DPI layer for seamless authentication and data sharing, and one that can scale easily and start generating network effects rapidly. 

Not every version of decentralized identity delivers the best possible combination of benefits. To learn more about the options you have, and how our government customers are using this technology to drive digital transformation, contact us and book a free, no-obligation workshop where we’ll analyze and discuss  your use case.

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Three ways decentralized identity delivers transformational Digital Public Infrastructure appeared first on Indicio.


Ontology

Telegram’s Policy Shift: The Need for Decentralization and Stronger Privacy Protections


The arrest of Telegram CEO Pavel Durov and the platform’s subsequent decision to provide user data to authorities has sparked widespread concern, not just among privacy advocates but also in political dissident communities. This moment marks a critical turning point in the ongoing debate about balancing privacy and regulation in digital spaces. But beyond Telegram’s headlines lies a broader narrative — one about centralized platforms, their vulnerabilities, and the growing urgency for decentralization and self-sovereign identity.

The Fallout of Durov’s Arrest

As reported recently, Durov’s arrest at a Paris airport and the criminal charges he now faces have cast a spotlight on the inherent risks of centralized platforms. Telegram, which has been lauded as a beacon for privacy and free speech, now finds itself caught between the demands of law enforcement and the privacy expectations of its nearly billion-strong user base.

Telegram’s new policy — to hand over user data like IP addresses and phone numbers to authorities with valid legal requests — marks a significant shift. The app, once seen as a safe haven for political dissidents, journalists, and activists in oppressive regimes, is now under scrutiny. Critics question whether this change will make Telegram more susceptible to the influence of repressive governments, undermining the platform’s core mission of protecting user privacy.

But Durov’s predicament is not just about Telegram. It’s a wake-up call for the entire digital ecosystem and a reminder of how centralized platforms are vulnerable to external pressures — from governments, corporations, or even internal mismanagement.

Centralization’s Fatal Flaw

As I previously discussed in my article, “The Telegram CEO’s Arrest Highlights the Urgent Need for Decentralization and Privacy Protections,” the key issue with centralized systems is their susceptibility to single points of failure. Whether it’s the CEO of a company being detained or a server being seized, centralized platforms are fragile by design. The arrest of Durov underscores how much risk is embedded in centralized models. When the figurehead or infrastructure of a platform is compromised, so too is the privacy and security of its entire user base.

Telegram’s decision to share user data highlights the thin line that centralized platforms walk. Their leadership can be coerced, their systems can be hacked, and their policies can be bent to serve the interests of governments, often at the expense of user privacy. This is where decentralization steps in as a necessary solution.

Decentralization: The Answer to Protecting Privacy

In contrast, decentralized systems are designed to be resistant to these kinds of pressures. As I explored in “Decentralized Identity and Reputation: Balancing Freedom and Regulation in Digital Platforms,” platforms built on decentralized frameworks lack a central authority that can be easily compromised or coerced. Instead, they rely on distributed networks that empower users with control over their data and communication.

For instance, decentralized identity (DID) is a transformative technology that allows individuals to own and manage their identities across platforms without needing to rely on a centralized entity like Telegram. With DID, there’s no single point of failure; no CEO can be arrested, no server can be seized, and no government can force a handover of user data. Users control their own credentials, and privacy becomes a fundamental right, not a privilege that can be revoked.
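To illustrate the "no central registry" point, here is a deliberately simplified sketch of minting a self-owned identifier from a locally generated keypair. Real DID methods such as did:key use multicodec and multibase encodings; the did:example form below is only for readability.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

# The user generates a keypair locally: no platform, server, or CEO involved.
signing_key = Ed25519PrivateKey.generate()
public_bytes = signing_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# A simplified, human-readable identifier derived from the public key.
did = "did:example:" + public_bytes.hex()
print(did)

# Control is proven by signing challenges with the matching private key,
# so no central database needs to exist for the identity to work.
challenge = b"login-nonce-12345"
signature = signing_key.sign(challenge)
```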

The recent developments at Telegram highlight how critical it is to shift toward decentralized identity systems. When platforms have no central control, they also become inherently more resistant to censorship and government overreach. In an era where governments are increasingly using the guise of regulation to invade privacy, decentralized platforms are not just a better alternative — they are becoming a necessity.

Striking a Balance: Decentralization with Responsibility

Of course, decentralized systems are not without their challenges. As we’ve seen with platforms like Silk Road and Tornado Cash, the anonymity offered by decentralization can sometimes provide a haven for illegal activities. This tension between freedom and responsibility was a central theme in my article on decentralized identity and reputation systems. While decentralized platforms offer privacy and autonomy, they also need systems of accountability.

One potential solution lies in decentralized reputation systems, where users build a reputation based on their actions within the network. This could help decentralized platforms self-regulate, ensuring that while privacy is protected, bad actors are held accountable. Such systems would allow users to engage with decentralized platforms anonymously while maintaining a level of trust and integrity within the community.

The Bigger Picture: What Telegram’s Shift Means for the Future of Privacy

The policy change at Telegram, combined with the increasing governmental pressure on platforms like it, underscores an uncomfortable truth: centralized platforms can no longer guarantee privacy. Whether it’s through government demands or corporate policy shifts, the privacy of users on centralized systems is always at risk.

This is why the shift toward decentralization and self-sovereign identity is so crucial. The power to control personal data and communications needs to be in the hands of the users, not corporations or governments. Telegram’s recent actions should serve as a wake-up call for anyone concerned about their digital privacy. As we move forward, decentralized platforms and identity systems are not just desirable — they are essential to preserving our freedoms in the digital age.

Conclusion: A Call to Decentralize

The arrest of Pavel Durov and Telegram’s subsequent policy shift have set the stage for a larger conversation about the future of privacy and free speech. In a world where centralized platforms are increasingly vulnerable to government overreach, it’s clear that decentralization is the path forward.

If we want to maintain control over our digital lives, we must embrace the technologies that enable it — decentralized identity, staking, and reputation systems. As governments and corporations continue to tighten their grip on the internet, decentralization may be the only way to keep our digital freedoms intact.

Interested in learning more about decentralized identities and how they can revolutionize transparency in venture capital? Explore Ontology’s decentralized identity solutions and see how we’re building the future of trust.

Telegram’s Policy Shift: The Need for Decentralization and Stronger Privacy Protections was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


HYPR

PCI DSS 4.0 Authentication Requirements: 5 Things to Know

The Payment Card Industry Security Standards Council recently updated their Data Security Standard (PCI DSS) for protecting payment card data. The latest version, PCI DSS 4.0, introduces more than 60 new or updated requirements, with new directives around passwords and multi-factor authentication (MFA) among the most consequential.


What is PCI DSS 4.0?

First introduced in 2004, the PCI DSS guidelines apply to any organization that stores, processes or transmits cardholder data. To demonstrate PCI DSS compliance, organizations undergo assessment on all systems that interact with the cardholder environment.

In March 2022, the Council announced PCI DSS version 4.0, providing guidelines that aim to better secure account holder and payment card data within today’s evolving cyberthreat landscape. Organizations are required to implement PCI DSS 4.0 guidelines in two phases. The first phase deadline was March 31, 2024 and included 13 new mandatory requirements. The next deadline is March 31, 2025, at which time another 51 new requirements, which were only recommendations in the first phase, become mandatory.

While version 4.0 contains updates across the board, some of the most significant relate to strong authentication requirements, specifically password usage and multi-factor authentication (MFA). Weak forms of authentication leave organizations and data vulnerable to brute force attacks, credential phishing and multiple other password-related attacks. Understanding these new requirements is key for PCI DSS compliance. We look at five critical areas as well as their potential impact for your business.

1. PCI DSS 4.0 Password Requirements

One of the most significant updates in PCI DSS version 4 involves stricter specifications regarding passwords. Key PCI DSS 4.0 password requirements (sections 8.3.4-8.3.9) include:

- Length and Complexity: Passwords must be at least 12 characters long and use special characters, uppercase, and lowercase letters.
- Reset and Re-Use: Passwords need to be reset every 90 days. An exception is made if continuous, risk-based authentication is used, where the security posture of accounts is dynamically analyzed, and real-time access is automatically determined accordingly.
- Limited Login Attempts: After a maximum of 10 unsuccessful login attempts, users should be locked out for at least 30 minutes or until they verify their identity through the help desk or other means.

Potential impact of the PCI DSS 4.0 password requirements

Longer passwords are more onerous for users and are more likely to be written down or insecurely saved in files on a device. Forced updates also tend to trigger unsafe user behaviors, as people often make only minor changes that hackers are likely to guess. Moreover, all these requirements are likely to result in a higher volume of help desk calls. Recent research from Forrester and HYPR shows that the average help desk call costs organizations $42.50.
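A minimal sketch of the checks described in sections 8.3.4-8.3.9 above, using an in-memory lockout counter as a simplification of what a real identity provider would persist:

```python
import string
from datetime import datetime, timedelta

MAX_ATTEMPTS, LOCKOUT = 10, timedelta(minutes=30)
failed_attempts: dict[str, list] = {}  # user -> [count, locked_until]

def password_meets_policy(pw: str) -> bool:
    # PCI DSS 4.0: at least 12 chars with upper, lower, and special characters.
    return (len(pw) >= 12
            and any(c.isupper() for c in pw)
            and any(c.islower() for c in pw)
            and any(c in string.punctuation for c in pw))

def record_failed_login(user: str) -> None:
    count, _ = failed_attempts.get(user, [0, None])
    count += 1
    locked_until = datetime.utcnow() + LOCKOUT if count >= MAX_ATTEMPTS else None
    failed_attempts[user] = [count, locked_until]

def is_locked_out(user: str) -> bool:
    _, locked_until = failed_attempts.get(user, [0, None])
    return locked_until is not None and datetime.utcnow() < locked_until
```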

2. MFA Required for All Access to the CDE

Under PCI DSS 3.2.1 guidelines, MFA was required only for administrators accessing the cardholder data environment (CDE). Under the new PCI DSS MFA rules (8.4.2), all access to the CDE must be gated by multi-factor authentication. The MFA requirements apply for all types of system components, including cloud, hosted systems, and on-premises applications, network security devices, workstations, servers and endpoints.

Multi-factor authentication is defined as using two independent factors from the categories:

- Something you know, such as a password or passphrase.
- Something you have, such as a token device or smart card.
- Something you are, such as a biometric element.

In its guidance on authentication factors, Version 4.0 specifically says to look at FIDO (Fast IDentity Online) for the use of  tokens, smart cards, or biometrics as authentication factors. While it stops short of requiring FIDO-based factors, some of its other guidance, as you will see below, points to a clear preference.

Potential impact

The new regulations make clear that multi-factor authentication must be used every time the CDE is accessed, even if a user already used MFA to authenticate into the network under the remote access requirements (see below). This will add significant friction for workers, with potential consequences for both productivity and employee satisfaction. Moreover, most organizations, even if they already use some form of MFA, do not have the correct technology or systems to address the requirement for MFA for desktops, workstations and servers.

3. PCI DSS Now Requires MFA for All Remote Access

Previously, MFA was required for remote access to the cardholder data environment. With this updated PCI DSS MFA guidance, anyone logging in from outside your secured network perimeter, even if they are not actually accessing the CDE, must use multi-factor authentication. This includes all employees, both users and administrators, and all third parties and vendors. This also means that any web-based access must use MFA, even if used by employees on site.

Potential impact

Effectively this means that all of your workforce that are remote, hybrid or have supporting roles outside the organization must use MFA at all times. It also means that any employee using a web-based application to access your networks and systems must use MFA, even if they are on site. In addition to the cost and IT burden of implementing MFA, cumbersome MFA procedures can negatively impact both employee productivity and satisfaction.   

4. PCI DSS MFA Configuration Requirements

The new standard doesn’t just cover who must use MFA and when, it also introduces guidelines on how MFA systems must be configured to prevent misuse. Many traditional MFA solutions are susceptible to man-in-the-middle, push bombing and other attacks that bypass MFA controls. Requirement 8.5 specifies weaknesses and misconfigurations to assess for PCI compliance. These include: 

- Your MFA system must not be susceptible to replay (aka man-in-the-middle) attacks.
- MFA must not be able to be bypassed unless a specific exception is documented and authorized by management.
- Your MFA solution must use two different and independent factors for authentication.
- Access cannot be granted until all authentication factors are successful.
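A small sketch of the "no access until every authentication factor succeeds" rule; the verifier callables are placeholders for real possession and biometric checks, and all factors are evaluated before any result is returned so a caller cannot learn which one failed:

```python
from typing import Callable

def authenticate(factor_checks: list[Callable[[], bool]]) -> bool:
    # Evaluate every factor; grant access only if all of them succeed.
    # Collecting all results first avoids leaking which factor failed.
    results = [check() for check in factor_checks]
    return all(results) and len(results) >= 2  # two independent factors minimum

# Placeholder verifiers standing in for real possession/biometric checks.
granted = authenticate([
    lambda: True,   # e.g. a FIDO passkey assertion verified
    lambda: True,   # e.g. a device-bound biometric match
])
print("Access granted" if granted else "Access denied")
```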

As discussed earlier, the PCI DSS guidance on types of authentication factors makes reference to FIDO-based authentication. FIDO authentication is phishing-resistant, eliminates replay attacks and, depending on the FIDO solution, is inherently multi-factor.

Potential impact

If your MFA solution uses SMS, OTPs or other insecure methods, it may not meet PCI compliance requirements.

5. Strong Cryptographic Protocols

While earlier versions of PCI DSS required the use of strong cryptographic protocols to protect transactions and cardholder data, PCI DSS 4.0 extends the cryptographic requirement. With the new rules, any stored sensitive authentication data (SAD) must be encrypted using strong cryptography. 

Potential impact

If your authentication system doesn’t properly encrypt and securely store authentication data, then it may not meet PCI compliance requirements.

PCI DSS Section 8.3.3

It's worthwhile to call out another critical provision of PCI DSS, which though not new, is receiving renewed attention. Section 8.3.3 (previously section 8.2.2) mandates that the user identity is verified before modifying any authentication factor. This is intended to prevent social engineering attacks that target the credential reset / account recovery process. 


Meet PCI DSS 4.0 Compliance With HYPR

The new PCI DSS framework now aligns much more closely with the NIST SP 800-63B Digital Identity Guidelines, guidance from CISA and the OMB, and other regulatory agencies that urge the adoption of FIDO-based phishing-resistant MFA and a Zero Trust authentication approach.

HYPR helps organizations comply with PCI DSS MFA requirements as well as multiple other provisions included in the standard. HYPR replaces the traditional password-based approach with secure passwordless authentication that is certified by FIDO and based on passkeys. Core elements of the solution, such as the incorporation of biometric authentication, possession of a trusted device, and cryptographic tokens securely stored on the device TPM or secure enclave, ensure strong, phishing-resistant multi-factor authentication that meets PCI DSS requirements. HYPR also provides secure self-service methods to verify identity for account recovery.

At the same time, HYPR greatly improves the user experience, eliminating the need for long, complex passwords and streamlining multi-factor authentication to a single user gesture. 

To learn how HYPR can help your organization meet PCI DSS 4.0 requirements, contact one of our compliance experts.

FAQs

1. What is PCI DSS 4.0, and why was it introduced?
PCI DSS 4.0 is an updated version of the Payment Card Industry Data Security Standard, announced in March 2022. It aims to enhance the security of cardholder data in response to the evolving cyberthreat landscape. It introduces new requirements, especially in areas such as strong authentication and multi-factor authentication (MFA), to better protect sensitive payment information.

2. What are the key changes in password requirements under PCI DSS 4.0?
Under PCI DSS 4.0, passwords must be at least 12 characters long and include a mix of special characters, uppercase, and lowercase letters. Passwords need to be reset every 90 days unless continuous, risk-based authentication is implemented. Additionally, accounts are locked after 10 unsuccessful login attempts, requiring identity verification for re-entry.

3. How does PCI DSS 4.0 impact multi-factor authentication (MFA) requirements?
PCI DSS 4.0 mandates MFA for all access to the cardholder data environment (CDE), not just administrators. This includes cloud, on-premises, and network components. MFA is also required for any remote access, even if employees are on-site but using web-based systems. The MFA system must be configured to resist attacks such as man-in-the-middle attacks and replay attacks.

4. What is the deadline for implementing PCI DSS 4.0 requirements?
The PCI DSS 4.0 guidelines are being rolled out in two phases. The first deadline, March 31, 2024, included 13 new mandatory requirements. The second phase, with an additional 51 requirements, must be fully implemented by March 31, 2025.

Editor's Note: This blog was originally published August 2023 and has been updated to reflect current timelines and provide additional information.

Tuesday, 24. September 2024

KuppingerCole

Navigating Data Challenges: Unlocking Power of Data Marketplaces


Modern enterprises face numerous data-related challenges, including siloed storage, security threats, and compliance requirements, making strategic and efficient data management essential. Navigating complex data landscapes requires ensuring data accessibility and security, while preventing unauthorized access and breaches. Robust data management strategies are key to maintaining competitive advantage and operational efficiency in today's fast-paced business environment. Data marketplaces – platforms that connect data producers of specific data products with data consumers who can leverage them for their own goals and projects – are an emerging technology that can power such strategies.

Join experts from KuppingerCole Analysts and Immuta as they discuss how data marketplaces address challenges in data management. They will explain how this approach can enhance data access control and internal sharing, provide a centralized platform for managing data assets, help break down silos, ensure compliance, streamline governance, improve security, and foster innovation, driving business success in a data-driven world.

Alexei Balaganski, Lead Analyst at KuppingerCole Analysts, will provide an overview of the risks and challenges in managing sensitive data at the enterprise level amidst the evolving compliance landscape. He will discuss how to balance security with accessibility and productivity, offering insights on reducing data friction while meeting regulatory requirements.

Bart Koek, Field CTO at Immuta, will discuss strategies for promoting efficient and compliant data sharing, present practical use cases, explore best practices from real-world implementations of data marketplaces at leading organizations, and provide an overview of Immuta’s Data Security Platform.




liminal (was OWI)

The Business Case for Customer Identity and Access Management in E-Commerce

The post The Business Case for Customer Identity and Access Management in E-Commerce appeared first on Liminal.co.

auth0

Authtoberfest 2024 is Here!

Join us this October as we celebrate Hacktoberfest 2024 by encouraging developers to contribute to the open-source community.

Infocert

Download page: Infocert IDC Vendor Profile


The post Download page: Infocert IDC Vendor Profile appeared first on infocert.digital.


Ocean Protocol

Ocean Nodes & Oasis Sapphire Integration — A New Era of Decentralized Encryption

Ocean Nodes & Oasis Sapphire Integration — A New Era of Decentralized Encryption Today we are excited to announce the upcoming integration of Ocean Nodes with Oasis Sapphire, to enhance encryption, security, and privacy across the network, while also introducing a revamped incentives program that will better reward node operators for their contributions. By integrating the Oasis SDK, we are t
Ocean Nodes & Oasis Sapphire Integration — A New Era of Decentralized Encryption

Today we are excited to announce the upcoming integration of Ocean Nodes with Oasis Sapphire, to enhance encryption, security, and privacy across the network, while also introducing a revamped incentives program that will better reward node operators for their contributions. By integrating the Oasis SDK, we are taking another step towards achieving our goal of democratized computing, empowering everyone to create and use AI without sacrificing privacy or control.

This post will cover the information on the Oasis Sapphire integration, the improvements to our encryption model, and the exciting overhaul of our incentives program.

In the past, encryption on the Ocean Network relied on a private key stored by the desired Ocean Provider. This system had two major drawbacks:

- Trust Issues: Users had to trust the provider to keep the encryption key secure.
- Provider Dependency: If the provider went offline, the user had to republish the asset with a new provider.

The integration of Oasis Sapphire eliminates these concerns by decentralizing encryption, ensuring that no single node holds undue control. With Sapphire, encryption is managed across the network, providing a more secure and resilient system.

Additionally, we’ve introduced NFT-based trusted node lists, allowing anyone to create a trust list. Only nodes on that list can decrypt and serve assets, ensuring enhanced security and reducing the need to trust individual providers.

Incentives Overhauled: Moving to Oasis Sapphire

Since the launch of the Ocean Nodes in August, we’ve received a lot of positive feedback from you, and the number of nodes grew beyond our expectations–currently sitting at 24,598 nodes (at the time of publication).

During Epoch 2 (Week 37), we realized that 81% of nodes received under 1 FET in rewards, and more than $1,000 in gas fees were spent – money we thought should be put to better use, that is, redirected to you. Keep reading.

To resolve this, we got to thinking and calculating and we overhauled the incentives program, bringing with it a host of improvements:

- Move the incentive program to Oasis Sapphire: this will reduce gas fees and leverage the existing tech stack.
- Incentives in ROSE: future incentives will be distributed in ROSE, the native token of the Oasis Network.
- Uptime Criteria: to encourage stable and reliable nodes, only nodes with at least 90% uptime will be eligible for rewards each week (lowered from the industry standard of 95%). The total incentive pool will be split evenly between all eligible nodes, as sketched below.
- 250,000 ROSE per Epoch: we will distribute a total of 250,000 ROSE per epoch, giving a significant incentive to maintain high availability. That’s more than 2x the rewards.

We’ll let you focus on point 4 of the above for a while.
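For points 3 and 4, a quick sketch of the per-node math under those rules, with made-up uptime figures:

```python
POOL_PER_EPOCH = 250_000    # ROSE distributed each epoch
UPTIME_THRESHOLD = 0.90     # minimum weekly uptime to qualify

node_uptimes = {"node-a": 0.99, "node-b": 0.93, "node-c": 0.71}  # toy data

eligible = [n for n, up in node_uptimes.items() if up >= UPTIME_THRESHOLD]
reward_each = POOL_PER_EPOCH / len(eligible) if eligible else 0.0

for node in eligible:
    print(f"{node}: {reward_each:,.2f} ROSE")  # pool split evenly
```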

Now, of course, this requires a bit of time and work on our end, and patience on your end. This means that, in order for all of the above to be achieved and to make the transition to Oasis Sapphire as smooth as possible, we are temporarily pausing the distribution of incentives. Monitoring will continue, and your rewards will accumulate during this period. Once the move to Sapphire is complete, the accumulated rewards will be converted into ROSE and distributed to eligible nodes.

For Epochs 3 and 4, the allocation and eligibility criteria will remain the same, but incentives will not be distributed at the end of the epochs. As said, after we transition to Oasis Sapphire, we will distribute the previously accumulated rewards in ROSE, ensuring no one misses out on their well-earned rewards.

Summary: What Node Operators Need to Know

- Higher rewards for reliable nodes: Nodes that maintain 90% uptime or higher will qualify for rewards, with a significant reward pool of 250,000 ROSE per epoch.
- Rewards will accumulate: Even though incentives will not be distributed until the integration is complete, your rewards will still be tracked and accumulated.
- ROSE rewards: Once incentives resume, rewards will be distributed in ROSE, benefiting from lower gas fees and efficient distribution.

The transition to Oasis Sapphire represents a major leap forward in our commitment to decentralization, privacy, and security. With better incentives (250K ROSE), more efficient rewards, and a fully decentralized encryption system, Ocean Nodes are now positioned to support the future of AI and data sharing on a global scale.

We are excited to see how this new phase empowers our community, and we remain dedicated to providing a network that rewards active participation and maintains the highest standards of decentralization and privacy.

Set up your node today and be part of the next chapter in decentralized AI!

Ocean Nodes & Oasis Sapphire Integration — A New Era of Decentralized Encryption was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Tokeny Solutions

AMA-AMBIOGEO Tokenizes $4.6 Billion Gold Reserves with Tokeny

The post AMA-AMBIOGEO Tokenizes $4.6 Billion Gold Reserves with Tokeny appeared first on Tokeny.

LUXEMBOURG, 24th September 2024 – AMA-AMBIOGEO, a pioneering mining Joint Venture (JV) with a focus on sustainable resource management, announces the tokenization of $4.6 billion in gold reserves, using Tokeny’s technology to transform previously inaccessible real-world assets (RWA) into tradable and compliant digital securities.

The JV between AMA Resources and AMBIOGEO combines 70 years of mining experience and fuses the talent of people distributed worldwide to change the financial capability of miners in South America, becoming one of the largest and most important exploration companies in the region and standing out as a unique player in the natural resources sector.

The Supernova Project is the inaugural venture in which AMA-AMBIOGEO has tokenized its gold reserves. This project combines two reserves located in northern Brazil: Supernova and Riacho Seco, holding a total of 474 metric tons of gold certified under the S-K 1300 standard, with an economic value of $36.8 billion at the time of tokenization. The discounted cash flow (DCF) present value assigned to the asset was, however, 12.5% of that figure, or $4.6 billion, accounting for extraction costs, the current state of the mines, and the time value of money. The asset has been transferred to a Wyoming LLC whose equity securities have been tokenized and are being promoted through a Private Placement offering under SEC Regulation D, Rule 506(c), and Regulation S for non-US investors.
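The valuation arithmetic quoted above is easy to check with a one-off sketch:

```python
certified_value = 36.8e9     # S-K 1300 certified in-ground value, USD
present_value_ratio = 0.125  # 12.5% after extraction costs, mine state, time value

tokenized_value = certified_value * present_value_ratio
print(f"${tokenized_value / 1e9:.1f} billion")  # -> $4.6 billion
```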

AMA-AMBIOGEO’ sustainability model leverages tokenization to unlock the financial value of gold reserves while allowing most of the minerals to remain in the ground. By converting these reserves into digital securities, investors can own a stake without the need for physical extraction, preserving the environment and providing liquidity through fractional ownership.

Unlike owning physical gold or land, co-ownership of tokenized proven gold reserves offers a sustainable way to unlock value without extraction. Our goal is to modernize the mining industry through innovation and sustainability. Partnering with Tokeny, we’re bringing $4.6 billion in gold reserves onchain, offering a digital experience with features like self-custody, transferability, and collateralization, capabilities that were never before available to investors.

Ernesto Bernadet, CEO of AMA Resources

Tokeny’s role as the technology provider for this innovative project ensures secure, compliant, and efficient tokenization, using the ERC-3643 standard for compliance and interoperability. It lays the foundation for broad distribution in multiple marketplaces and future integration with DeFi platforms to enable innovative features yet to come.
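As a hedged sketch of how "only qualified investors can hold the token" surfaces to an integrator: an ERC-3643 deployment exposes an Identity Registry whose isVerified check gates transfers. The RPC endpoint and addresses below are placeholders.

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://<your-rpc-endpoint>"))

# Minimal ABI fragment for the ERC-3643 Identity Registry eligibility check.
REGISTRY_ABI = [{
    "name": "isVerified",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "_userAddress", "type": "address"}],
    "outputs": [{"name": "", "type": "bool"}],
}]

# Placeholder addresses in a structurally valid form.
registry_addr = Web3.to_checksum_address("0x" + "11" * 20)
investor_addr = Web3.to_checksum_address("0x" + "22" * 20)

registry = w3.eth.contract(address=registry_addr, abi=REGISTRY_ABI)

# True only if the investor's onchain identity passes the issuer's claim checks.
print(registry.functions.isVerified(investor_addr).call())
```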

Compliance is the backbone of RWA tokenization, and AMA-AMBIOGEO’s dedication to enforcing it onchain sets them apart. We’re proud to support their efforts with our onchain operating system, enabling them to issue, manage, and distribute permissioned ERC-3643 tokens that only qualified investors can access and trade, all while keeping the door open for DeFi innovation.

Luc Falempin, CEO of Tokeny

About AMA Resources and AMBIOGEO

AMA Resources is a dynamic corporation headquartered in Florida (US). It holds ownership of nineteen properties, totaling approximately 22,101 hectares (equivalent to 54,613 acres). In addition, it holds several gold and copper concessions in Argentina. These strategically located properties boast proximity to essential infrastructure, including water, electricity, and transportation access via land, rail, or sea. Its core focus lies in exploration financing, leveraging the innovative concept of tokenization.

AMBIOGEO is a mining company committed to sustainability and social responsibility, located in Parnamirim, Rio Grande do Norte, but active in all regions of Brazil. The company operates mainly in various professional, scientific and technical activities, with a significant focus on environmental and geological consulting.

By harnessing blockchain technology, they aim to transform proven mineral reserves into tradable digital assets. Their mission: to unlock value, enhance liquidity, and empower investors in the natural resources landscape.

About Tokeny

Tokeny is a leading onchain finance operating system. Tokeny has pioneered compliant tokenization with the open-source ERC-3643 standard and advanced white-label software solutions. The enterprise-grade platform and APIs unify fragmented onchain and offchain workflows, integrating essential services to eliminate silos. It enables seamless issuance, transfer, and management of tokenized securities. By automating operations, offering innovative onchain services, and connecting with any desired distributors, Tokeny helps financial actors attract more clients and improve liquidity. Trusted globally, Tokeny has successfully executed over 120 use cases across five continents and facilitated 3 billion onchain transactions and operations.

Website | LinkedIn | X/Twitter

The post AMA-AMBIOGEO Tokenizes $4.6 Billion Gold Reserves with Tokeny appeared first on Tokeny.


auth0

Level Up: Auth0 Plans Just Got an Upgrade

We’ve leveled up our Free, Essential, and Professional plans.

KuppingerCole

Understanding the Opposition


by Anne Bailey

What to do now to prepare for the future

Earlier this year, KuppingerCole published Strategic Cybersecurity Recommendations for 2024-2033. Analysts at KuppingerCole conducted scenario-based research on the most critical trends, risks, and opportunities of the next ten years, which yielded the recommendations we present in the paper.

One of the recommendations we make is to know the opposition.

Know the Opposition

The paper describes a range of threats that must first be identified before effective mitigation action can be taken. At a geopolitical level, different countries and regions will have different patterns of development over the next ten years, some taking more protectionist stances and others open and collaborative, with of course many varieties in between. These different environments foster different types of economic development... and crime.

Businesses operating in each environment must strive to understand the malicious actors that thrive in that environment, as well as their motivations. Are the conditions right for lone wolf attacks, state-sponsored attacks, or even corporate-on-corporate attacks? Are they seeking financial gain, disruption, or influence? The answers to these questions should help shape a unique defense strategy.

How to Know

Chief Information Security Officers (CISOs) must know the opposition and should seek to do so by understanding the environment and context that cause malicious actors to attack. There are of course many ways to do this. We recommend having incident response plan(s) that address the evolving threats and threat actors, and scenario planning the threats that are particular to your region, industry, and business.

One place to do that is at cyberevolution in Frankfurt, Germany, this December. There is a track on understanding the opposition, covering quantum threats, threat intelligence, the business models behind common attacks, and much more. Take proactive steps to understand the threats by joining cyberevolution.

Monday, 23. September 2024

Microsoft Entra (Azure AD) Blog

Move to cloud authentication with the AD FS migration tool!


We’re excited to announce that the migration tool for Active Directory Federation Service (AD FS) customers to move their apps to Microsoft Entra ID is now generally available! Existing customers can begin updating their identity management with more extensive monitoring and security infrastructure by quickly identifying which applications are capable of being migrated and assessing all their AD FS applications for compatibility. If you don't have an Entra ID account, you can still access the Migrate AD FS to Microsoft Entra ID guide to see what a migration would look like for your organization.


In November we announced AD FS Application Migration would be moving to public preview, and the response from our partners and customers has been overwhelmingly positive. For some, transitioning to cloud-based security is a daunting task, but the tool has proven to dramatically streamline the process of moving to Microsoft Entra ID. 


A simplified workflow, reduced need for manual intervention, and minimized downtime (for applications and end users) have reduced stress for hassle-free migrations. The tool not only checks the compatibility of your applications with Entra ID, but it can also suggest how to resolve any issues. It then monitors the migration progress and reflects the latest changes in your applications. Watch the demo to see the tool in action.

Moving from AD FS to a more agile, responsive, cloud-native solution helps overcome some of the inherent limitations of the old way of managing identities.

In addition to more robust security, organizations count greater visibility and control with a centralized, intuitive admin center and reduced server costs as transformative benefits of moving to modern identity management. Moreover, Entra ID features can help organizations achieve better security and compliance with multifactor authentication (MFA) and conditional access policies—both of which provide a critical foundation for a Zero Trust strategy.

More Entra ID features include:

Passwordless and MFA for better user experience.
A rich set of apps, APIs, SDKs, and connectors for customization and extensibility.
Granular adaptive access controls to define and monitor conditional access.
Self-service portals that allow employees to securely manage their own identity.

Want to learn more about Microsoft Entra? Get the datasheet and take a tour here. Ready to get started? Visit Microsoft Learn and explore our detailed AD FS Application Migration guide. 


Have any questions or feedback? Let us know here.  


Melanie Maynes

Director of Product Marketing


For a comprehensive overview of the migration tool and its capabilities, check out these other resources:

Overview of AD FS application migration - Microsoft Entra ID | Microsoft Learn
Use the AD FS application migration to move AD FS apps to Microsoft Entra ID - Microsoft Entra ID | Microsoft Learn
Demo: Effortless Application Migration Using Microsoft Entra ID | OD03 (youtube.com)
Best practices to migrate applications and authentication to Microsoft Entra ID - Microsoft Entra | Microsoft Learn
Customer Case Study: Microsoft Customer Story - Universidad de Las Palmas de Gran Canaria boosts accessibility with Microsoft Entra ID

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

Explore the key benefits of Microsoft Entra Private Access


The traditional network security models are becoming increasingly ineffective in a world where remote work and cloud services are the norm. Conventional technologies like VPNs, while popular, offer limited protection in a boundary-less landscape, typically granting users excessive network access and posing significant risks. If compromised, these can lead to unauthorized access and potentially lateral movement within corporate networks, exposing sensitive data and resources. Microsoft Entra Private Access is at the forefront of addressing these challenges by effectively integrating identity and network access controls.

Microsoft Entra Private Access

In July we announced general availability of Microsoft Entra Suite, which brings together identity and network access controls to secure access to any cloud or on-premises application or resource from any location. We also announced Microsoft’s Security Service Edge (SSE) solution general availability. Microsoft Entra Private Access, a core component of Microsoft’s SSE solution, allows you to replace your VPN with an identity-centric Zero Trust Network Access (ZTNA) solution to securely connect users to any private resource and application without exposing full network access to all resources. It’s built on Zero Trust principles to protect against cyber threats and mitigate lateral movement. Through Microsoft’s global private network, give your users a fast, seamless, edge-accelerated access experience that balances security with productivity.

Figure 1: Secure access to all private applications, for users anywhere, with an identity centric ZTNA

Modernize access to private applications

Despite the cloud’s growing dominance, you may still rely on on-premises infrastructure and use legacy VPNs to enable your remote workforce. Legacy VPNs typically grant excessive access to the entire network by making the remote user’s device part of your network.

Figure 2: Legacy VPNs typically grant excessive access to the entire network

Microsoft Entra Private Access helps you easily start retiring your legacy VPN and level up to an identity-centric ZTNA solution that helps reduce your attack surface, mitigates lateral threat movement, and removes unnecessary operational complexity for your IT teams. Unlike traditional VPNs, Microsoft Entra Private Access protects access to your network for all your users—whether they are remote or local, and accessing any legacy, custom, modern, or private apps that are on-premises or on any cloud.

Figure 3: Replace legacy VPN with an identity centric ZTNA solution

For example, Microsoft Entra Private Access enhances security for Remote Desktop Protocol (RDP) sessions by enabling access without direct network connectivity. It leverages Conditional Access policies, including multifactor authentication (MFA), to validate both device and user identities. This ensures that only authenticated users with compliant devices can establish an RDP session on your network, providing a secure and seamless remote access experience. By integrating with Microsoft Entra ID, Microsoft Entra Private Access validates access tokens and connects users to the appropriate private server, reinforcing the security posture without the need for traditional VPN solutions.

Accelerate your journey to Zero Trust with Microsoft Entra Private Access

Microsoft Entra Private Access accelerates your journey to ZTNA by offering a streamlined way to enforce least-privilege access to on-premises or private applications, extending Zero Trust principles to any private app or resource, regardless of its location, on-premises or in any cloud.

Figure 5: Accelerate your ZTNA journey with Microsoft Entra Private Access

Here, in more detail, are the key capabilities that help you move from legacy VPNs to ZTNA:

Quick Access policy simplifies the transition from legacy VPNs, making it easy to onboard with Microsoft Entra Private Access. It allows you to create network segments that can include multiple apps and resources.

Figure 6: Fast and easy migration from legacy VPNs with Quick Access policy

Over time, Private Application Discovery enables you to discover all your private apps, onboard them to enable segmented access, and simplify the creation of Conditional Access policies for groups of apps based on business impact levels.

Figure 7: Automatic private application discovery and onboarding

Enforce Conditional Access across all private resources

To enhance your security posture and minimize the attack surface, it’s crucial to implement robust Conditional Access controls, such as biometric and/or phishing-resistant MFA, across all private resources and applications, including legacy or proprietary applications that may not support modern identity.

The familiar Conditional Access policies used today can now be extended to all private apps, including legacy apps and non-web resources, such as RDP, SSH, SMB, SAP, or any other TCP- or UDP-based private application, resource, or network endpoint.

Figure 8: Enforce Conditional Access across all private resources

Conditional Access is applied to every network flow, ensuring comprehensive security coverage across all your private apps and resources—including MFA, location-based security, advanced segmentation, and adaptive least-privilege access policies—without making any changes to your apps or resources.
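
As a rough illustration of what such a policy looks like as configuration, here is a hedged Python sketch that creates a report-only Conditional Access policy through the Microsoft Graph conditionalAccess API. The application ID and bearer token are placeholders (real deployments would acquire a token with MSAL), and the payload should be checked against the current Graph documentation.

```python
# Sketch: requiring MFA for a private app via the Microsoft Graph
# Conditional Access API. The app ID and token below are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"

policy = {
    "displayName": "Require MFA for private app (example)",
    "state": "enabledForReportingButNotEnforced",  # report-only while testing
    "conditions": {
        "clientAppTypes": ["all"],
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["00000000-0000-0000-0000-000000000000"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(GRAPH, json=policy,
                     headers={"Authorization": "Bearer <token>"})
resp.raise_for_status()
print(resp.json()["id"])  # ID of the newly created policy
```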


Deliver seamless access to private apps and resources with single sign-on

Single sign-on (SSO) simplifies the user experience by eliminating the need to sign in to each private application individually. By enabling SSO, users gain seamless access to all necessary private applications, whether located on-premises or across various clouds, without the need for repeated authentication or modifications to existing apps.

Microsoft Entra Private Access further streamlines this process by providing SSO for on-premises resources, utilizing Kerberos for secure, ticket-based authentication. For an even more integrated experience, you can opt to implement Windows Hello for Business with cloud Kerberos trust, offering a modern, passwordless sign-on option for users. This cohesive approach to SSO, supported by Microsoft Entra Private Access, ensures a secure and efficient access management system for private resources across the enterprise landscape.

Deploy across various platforms, ports, and protocols

Enable secure connectivity to private resources from Windows and Android, with support for iOS and MacOS coming later this year, and Linux support to follow. This service spans all operating systems and accommodates any port and protocol, including SMB, RDP, FTP, SSH, SAP, printing, and all other TCP/UDP-based protocols. For security teams already using an Application Proxy, you can seamlessly and confidently transition to Microsoft Entra Private Access knowing that all existing use cases and access to existing private web applications will keep working with no disruption.


Securing just-in-time access to sensitive resources

Microsoft Entra Private Access, tightly integrated with Privileged Identity Management (PIM), a service within Microsoft Entra ID Governance, helps you secure just-in-time access to private resources for privileged users. This integration ensures that privileged access is granted only when necessary, aligning with the Zero Trust principle of least privilege access. It allows for the enforcement of robust Conditional Access controls, such as MFA, to ensure that only eligible and validated users can access sensitive resources. This approach not only enhances security but also supports compliance and auditing requirements by providing detailed tracking and logging of privileged access requests.

Secure access to Azure managed services with Microsoft Entra Private Access

Azure offers many managed services, such as Azure SQL, Azure Storage, and Azure ML, among others. Microsoft Entra Private Access ensures a secure, private connection to Azure services while enforcing security policies and posture during access, allowing you to enforce Conditional Access controls such as MFA and IP-based access controls. With comprehensive enforcement of identity and network access controls, Microsoft Entra Private Access ensures that managed services are accessed securely. Here are two key scenarios:

Secure Azure managed services access: Typically, Azure services are accessed over the internet. However, for security reasons, it’s preferable to keep the traffic between users or applications and Azure services private, avoiding exposure to the internet. This can be achieved through Microsoft Entra Private Access, where services like Azure Storage can be connected to a virtual network (vNet) using Private Link. This ensures that all traffic remains private, while additional identity and network access controls are enforced.

Figure 11: Enable secure access to Azure Storage with Private Access through Private Link

Service endpoint for controlled access: In contrast to Private Link, the service endpoint method does not integrate services into a vNet. Instead, it restricts incoming traffic to connections from specified connector IP addresses through Microsoft Entra Private Access. This approach helps secure access to Azure services by permitting access solely through an approved path, where additional security measures like MFA and device posture can be enforced.

Figure 12: Ensures a single, secure path to the Azure managed services through Microsoft Entra Private Access

Simplify Microsoft Entra private network connector deployment for your private workloads

In addition to the Microsoft Entra admin center, the private network connector is now available on Azure Marketplace and AWS Marketplace in preview. This allows users to easily deploy a virtual machine with a pre-installed Private Access connector through a streamlined managed model for Azure and AWS workloads. The Marketplace offerings automate the installation and registration process, simplifying authentication setup and enhancing the user experience.

Figure 13: Microsoft Entra private network connector on Microsoft Azure Marketplace

Figure 14: Microsoft Entra private network connector on AWS Marketplace

The Microsoft Entra private network connector is a required software component for enabling Microsoft Entra Private Access. It sits alongside customers’ private applications in the customer’s network and is designed to provide secure and convenient access to them from any device and location. It acts as a bridge between Microsoft’s SSE edge and application servers, facilitating the authentication, authorization, and encryption of traffic.

Enable edge accelerated Zero Trust private domain name resolution

Microsoft Entra Private Access enhances your organization’s Domain Name System (DNS) capabilities and simplifies access to IP-based app segments and private resources using FQDNs, allowing your users to reach private resources with single-label names or hostnames without complex configurations. With accelerated DNS at Microsoft’s SSE edge, DNS responses are cached, leading to significantly faster resolution times and enhanced performance. Moreover, the integration of DNS with Conditional Access adds an extra layer of identity-centric security controls, allowing for more granular control over access to private resources.

For instance, with Private DNS support, you can provide your domain suffixes to simplify Zero Trust access to private apps using FQDNs, streamlining the connection process to internal resources, while using your existing DNS deployments. This is particularly beneficial in scenarios where your users need to seamlessly access private resources without the need for VPNs or domain-joined devices, while offering a more secure and efficient way to manage access.

Simplify access and improve end user experience at a global scale

Enhance user productivity by leveraging Microsoft’s vast global edge presence, providing fast and easy access to private apps and resources—located on-premises, on private data centers, and across any cloud. Users benefit from optimized traffic routing through the closest worldwide Point of Presence (PoP), reducing latency for a consistently swift hybrid work experience.

Deploy side-by-side with third-party network access solutions

A distinctive feature of Microsoft’s SSE solution is its built-in compatibility with third-party network access solutions, allowing you to acquire only the traffic you need to send to Microsoft’s SSE edges. Leverage Microsoft and third-party network access solutions in a unified environment to harness a robust set of capabilities from both and accelerate your Zero Trust journey. The flexible deployment options offered by Microsoft’s SSE solution empower you with enhanced security and seamless connectivity for an optimal user experience.

Conclusion

Simplifying and securing access for your hybrid workforce is crucial in a landscape where traditional boundaries have dissolved. Enforcing least-privilege access and minimizing reliance on legacy tools like VPNs are essential steps in reducing risk and mitigating sophisticated cyberattacks.

Microsoft Entra Private Access helps you secure access to all your private apps and resources for users anywhere with an identity-centric ZTNA solution. It allows you to replace your legacy VPN with ZTNA to securely connect users to any private resource and application without exposing full network access to all resources.

The unified approach across identity and network access within Microsoft’s SSE solution signifies a new era of network security. It ensures that users are authenticated and their devices are compliant before they access private resources.

Learn More

To get started, begin a trial to explore the generally available Microsoft Entra Private Access. You can also sign up for a Microsoft Entra Suite trial, which includes Microsoft Entra Private Access. For further help, contact a Microsoft sales representative and share your feedback to help us make this solution even better.

Ashish Jain, Principal Group Product Manager

Abdi Saeedabadi, Senior Product Marketing Manager


Read more on this topic

Microsoft Entra Private Access
Microsoft Security Service Edge now generally available
Simplify your Zero Trust strategy with the Microsoft Entra Suite and unified security operations platform, now generally available
Watch Zero Trust spotlight webcast
Watch Microsoft Entra Private Access tech accelerator webinar
Get started and try Microsoft Entra Private Access
Get started and try Microsoft Entra Internet Access
Get started and try Entra suite products


Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

Microsoft Entra Internet Access
Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

Spruce Systems

How Personal Data Licenses Can Keep Digital Identity Private

How digital identity can give you total control of your sensitive data.

The world is in the early stages of supplementing old-school paper identity documents with digitally secured identification, licensing, and other credentials. This major technological and infrastructure shift offers big benefits in privacy, security, and convenience for everyday people.

Digital identity has the potential to vastly improve your control over your personal data. Already, many verifiable digital credential (VDC) formats support a feature known as “selective disclosure,” which lets users choose exactly what data fields they hand over during a verification. 
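
As a rough illustration of how selective disclosure can work under the hood, several credential formats (SD-JWT, for example) put only salted hashes of claims in the signed credential, so the holder can later reveal just the claims a verifier needs. The sketch below is conceptual, not any particular wallet's implementation.

```python
# Conceptual sketch of hash-based selective disclosure: the issuer signs
# only salted digests; the holder reveals (salt, name, value) triples for
# the fields it chooses, and the verifier recomputes the digests.
import hashlib, json, secrets

def blind(claims):
    disclosures, digests = {}, []
    for name, value in claims.items():
        salt = secrets.token_hex(16)
        blob = json.dumps([salt, name, value])
        digests.append(hashlib.sha256(blob.encode()).hexdigest())
        disclosures[name] = blob
    return disclosures, sorted(digests)  # digests go into the signed credential

claims = {"name": "Alice Example", "dob": "1990-01-01", "licence_no": "D1234567"}
disclosures, digests = blind(claims)

# The holder presents only the date of birth; the verifier checks it
# against the digests bound to the issuer's signature.
revealed = disclosures["dob"]
assert hashlib.sha256(revealed.encode()).hexdigest() in digests
```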

We can go even further and give users broader, long-term control over the sharing of their information, including the ability to closely monitor who has permission to use it, and even to exercise their right to have their data deleted with the tap of a button.

For example, millions of Americans today spend countless hours on phone calls and dust off fax machines to send information between primary care physicians and healthcare specialists. We could vastly improve the efficiency of electronic health record systems and the patient experience by describing handling rules for a patient’s protected health information (PHI) in a human- and machine-readable format called a “personal data license,” which is digitally signed by the patient.

A blood test result, for instance, could be shared with a patient’s primary care physician (PCP) along with a new personal data license specifying that the results may be stored for up to 5 years by all receiving entities and may be shared with the patient’s cardiologist without the patient needing to fill out any additional forms.

After 5 years, or when the patient decides to revoke the personal data license with a tap in their app, the data would need to be deleted under the HIPAA privacy framework. The patient could also update the personal data license to allow for other counterparties to also receive the data, or extend the sharing duration. Depending on the reporting requirements described in the license, the patient could also track when, where, and to whom their PHI was shared further.
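
SpruceID has not yet published a schema for these licenses (that is promised for part 2), so the following Python sketch is purely hypothetical: the field names, the Ed25519 signature choice, and the revocation comment are illustrative assumptions about what a patient-signed, revocable license could look like.

```python
# Hypothetical shape of a personal data license; every field name and the
# signature scheme here are illustrative assumptions, not a published spec.
import json
from datetime import datetime, timedelta, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

patient_key = Ed25519PrivateKey.generate()

license_doc = {
    "subject": "did:example:patient123",          # the patient
    "data": "urn:example:blood-test-2024-10-01",  # the PHI being licensed
    "licensees": ["did:example:pcp", "did:example:cardiologist"],
    "permissions": ["store", "read", "reshare-within-licensees"],
    "expires": (datetime.now(timezone.utc) + timedelta(days=5 * 365)).isoformat(),
    "reporting": "notify-subject-on-share",
}

payload = json.dumps(license_doc, sort_keys=True).encode()
signature = patient_key.sign(payload)  # revocation could publish a signed tombstone
```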

We call this kind of system “Personal Data Licensing,” and it can work not only with health records but also with digital identity, professional credentials, and anything of value that is paper or plastic today but will be digital tomorrow. Making it a reality will involve technology working hand in hand with privacy-focused public policy.

If you haven't already, subscribe to our blog to stay tuned for part 2 on this topic, where we will describe in detail how it works in practice.

Subscribe Now

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Metadium

Partnership with WEB2X — Web3 Development Services


Dear Community,

We are excited to announce our partnership with WEB2X, a Web3 development service provider. WEB2X enables companies to easily build Web3 services without the need for developers or complex infrastructure — just by connecting to its API. This collaboration will significantly streamline the transition to Web3 for companies preparing to enter the space.

Metadium, with its superior DID technology, will be included in WEB2X’s infrastructure as part of this partnership. Additionally, WEB2X will provide partial gas fee support to companies that launch their services through the WEB2X platform.

Here are some of the key features of WEB2X (an illustrative API sketch follows the list):

📍 AUTH: Create and link blockchain accounts with just a passkey while maintaining existing service procedures.

📍 Transaction: Generate and execute blockchain transactions using APIs without additional training in blockchain development languages.

📍 Functions: Easily access data generated on the blockchain and integrate it with existing services through automation.

📍 Oracle: Connect external data such as exchange rates, stock prices, and identity information to the blockchain to ensure its reliability.

📍 VRF: Provide tamper-proof random data through a verifiable random function (VRF), with all history recorded and verified on the blockchain and accessible via API.

📍 CCMP: Enable cross-chain message exchange, allowing compatibility between different blockchains.
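
To make the "just connect to its API" idea concrete, here is a purely hypothetical Python sketch. WEB2X's real endpoints, parameters, and response fields are not documented here, so every name below (base URL, path, payload keys) is an assumption for illustration only.

```python
# Purely illustrative: the endpoint path, payload fields, and response
# shape are hypothetical stand-ins for a WEB2X-style transaction API.
import requests

WEB2X_API = "https://api.web2x.example/v1"  # placeholder base URL

def issue_product(api_key: str, to_address: str, product_id: str) -> str:
    """Ask a (hypothetical) Transaction endpoint to mint a product such as
    a ticket or membership to a user's blockchain account."""
    resp = requests.post(
        f"{WEB2X_API}/transactions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"to": to_address, "product": product_id, "chain": "metadium"},
    )
    resp.raise_for_status()
    return resp.json()["txHash"]  # transaction hash for later lookup
```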

WEB2X officially launched on August 20 with eight product types: tickets, digital photocards, vouchers, season passes, memberships, coupons, commemorative badges, and certificates. The platform is continuously adding more products.

You can now experience Web3 development with WEB2X by using its “Try It” feature, which offers a hands-on trial in under 30 seconds with just a few clicks. For more information, please check the links below:

🔗 Try WEB2X: https://web2x.io/event

🔗 WEB2X Official Site: https://web2x.io/

We at Metadium hope this partnership with WEB2X will be a stepping stone for more builders to join the Metadium ecosystem effortlessly. We look forward to your interest and participation.

Metadium Team

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

Partnership with WEB2X — Web3 Development Services was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 22. September 2024

KuppingerCole

Flexibility and Adaptability are Key: Identity Fabric 2025


In this episode, Matthias Reinwarth discusses the updates to the Identity Fabric and IAM reference architecture with Dr. Philipp Messerschmidt and Martin Kuppinger. The Identity Fabric is a holistic concept that provides seamless yet secure access to every type of identity for every type of service. The update to the Identity Fabric is necessary to reflect the developments in the IAM world, such as new trends in authorization and authentication.

The IAM reference architecture provides more detail and functional capabilities for each pillar of IAM. The update also includes the addition of new identity types and the inclusion of architectural concepts like microservice architectures and identity API layers. The Identity Fabric 2025 will be flexible and adaptable to future trends and challenges in IAM.



Friday, 20. September 2024

Spherical Cow Consulting

The Wallets Are Coming – But Are We Ready for What’s Next?


As people like John Bradley and Shannon Roddy have noted in their conference talks earlier this year, the wonderful world of wallets is about to experience some of the growing pains that “traditional” identity federations have been dealing with for decades. When I say traditional, I’m talking about the SAML-based bilateral and multilateral federations that have dominated the Research and Education (R&E) space for years. These federations have served as the backbone for secure access in academic and research settings, but here’s the kicker – the lessons learned from that world aren’t making their way into the commercial or enterprise space.

And we should be concerned about that.

Growing Pains, Version 2.0

Why? Because wallets are about to hit the same hurdles that identity federations have been jumping (or tripping over) for a long time. Things like trust frameworks, governance, funding models, and the thorny question of how to manage identity at scale are all about to come knocking. It’s one thing to issue verifiable credentials, but it’s a whole other beast to manage them securely and efficiently across borders, organizations, and systems.

R&E federations have been there, done that, got the t-shirt, and are still trying to figure out if the t-shirt fits. Or if it’s even still wearable.

The Disconnect Between Worlds

Here’s where things get tricky: despite the similarities, the experience of the R&E sector – where we find the largest and most active identity federations in the world – isn’t translating to the commercial or enterprise space. The enterprise world, which is now buzzing about digital wallets and verifiable credentials, seems to be missing the memo on the challenges of running federations.

What I tend to hear is “eh, that’s SAML-based. SAML is dead, didn’t you know? The R&E experience couldn’t possibly be relevant to my Very Special use case (that happens to look like a thousand other use cases).”

Identity federation isn’t just about the technology. It’s about building trust between organizations, having a solid governance framework, and making sure there’s a sustainable model to keep the lights on. And right now, most commercial ventures diving into wallets are focusing more on the tech (which, let’s face it, is the shiny part) and less on the less-glamorous, but critical, infrastructure.

The R&E Space is Tired – And Underfunded

Now, let’s talk about the state of the R&E world. It’s tired. Federations are not only underfunded but often stretched to their breaking points. Out of the 76 federations we know about globally, maybe five have the funding and resources to do the innovative work that’s needed to stay ahead. The rest? They’re just trying to keep things running, dealing with legacy systems, and clinging to SAML because it’s the devil they know.

But the reality is that while the federations in R&E have the most experience in dealing with identity at scale, they’re not in a position to help the commercial world solve its wallet issues. And frankly, many of them are struggling to stay relevant as the world moves toward OpenID Connect and verifiable credentials.

What Comes Next?

So, what does this mean for the future of wallets and federations? Well, the commercial world is about to discover that managing digital identity is more than just fancy tech. It’s about trust, governance, and sustainability – all things that the R&E space has been grappling with for years.

The challenge is that while the R&E world has valuable lessons to offer, it may not have the capacity or energy to lead the charge into this next era of digital identity. And without a more collaborative approach that brings together the best of both worlds, we could see a lot of wheel reinvention – and a lot of avoidable mistakes.

In short: the wallets are coming, but are we ready for what’s next? That’s the real question.

If you’re interested in learning more about navigating this process or need support in engaging with standards development, don’t hesitate to reach out. With my experience across various SDOs, I’m here to help guide you through the complexities of Internet standards development.

The post The Wallets Are Coming – But Are We Ready for What’s Next? appeared first on Spherical Cow Consulting.


KuppingerCole

Future-Proofing Your Identity Systems: What SAP’s IDM Sunset Means for Your Organization


by Matthias Reinwarth

With SAP announcing the end of maintenance for its Identity Management (IDM) system by 2027 and extending support through 2030, organizations using on-premises identity governance systems face a critical decision. While this may seem like ample time, replacing an Identity Governance and Administration (IGA) solution is a complex and often lengthy process that can take several years to complete. Organizations must begin planning now to avoid rushed decisions and potential disruptions.

A Complex Transition Ahead

Replacing an IGA system is far more than a simple technical upgrade. These systems are deeply embedded in user lifecycle management, provisioning, and access governance, and swapping them out can be challenging. On average, replacing an IGA system takes at least three years, owing to the need for thorough planning, process alignment, and system integration. The decisions made today will affect organizations for decades to come, making it critical to consider future requirements rather than merely replicating existing systems with newer tools.

Rethinking IAM for the Future

The end of SAP’s IDM system provides an opportunity to reimagine how Identity and Access Management (IAM) should be designed in the future. Rather than focusing on a like-for-like replacement, organizations should take a strategic approach, considering how identity governance will evolve in a hybrid IT environment. Modular, flexible architectures, especially those based on the KuppingerCole Identity Fabric, can provide the adaptability needed to address evolving security, governance, and access management challenges in hybrid environments.

Regulatory Pressure and Hybrid Complexity

The regulatory environment around identity management has become increasingly complex, and organizations must now comply with stricter access governance requirements. Hybrid IT setups, combining on-premises systems with cloud services, complicate the landscape. Many organizations already run multiple identity management systems - one for on-premises applications and another for cloud services - leading to integration headaches. However, this challenge also presents an opportunity to streamline identity governance processes and modernize outdated systems.

Efficiency Through Automation

One key lesson from traditional IGA implementations is the need for greater automation. Manual processes, such as cumbersome recertification workflows and role management, often reduce efficiency and increase the risk of errors. Modern IGA solutions should prioritize automation to handle provisioning and governance tasks more effectively. Over-customization has been a frequent issue with legacy IGA systems, leading to complex environments that are difficult to update and maintain. Reducing customization in favor of standardized, scalable solutions can simplify future upgrades and lower long-term maintenance costs.

Exploring Alternatives: Cloud and Hybrid Approaches

With SAP shifting its focus toward cloud-based identity services, organizations must evaluate the potential of cloud IGA solutions. Both SAP Cloud Identity Services and Microsoft Entra ID Governance services might offer viable alternatives to on-premises IDM systems, but a one-size-fits-all approach is rarely the answer. Each organization has unique needs based on factors like regulatory requirements, business size, and complexity. Conducting a comprehensive requirements analysis is essential before selecting a tool, ensuring it aligns with long-term strategic goals.

Holistic Planning for a Future-Ready IGA

The replacement of an IGA system isn't just a technical exercise. It requires a holistic rethinking of processes such as policy enforcement, role models, and integration with risk management solutions. The cost of such projects goes beyond licensing fees, as implementation costs can be six to ten times higher than subscription costs alone. Therefore, a thorough approach to process reviews, tool selection, and planning will pay dividends, reducing the risk of costly rework or operational inefficiencies.

The Financial Impact of IGA System Replacement

Replacing an IGA system is a significant financial commitment, especially with the shift toward subscription-based models. However, organizations that carefully plan and choose the right solutions will see long-term benefits in terms of compliance, operational efficiency, and security. Investing in the right identity governance infrastructure now will ensure that future regulatory and technological challenges are met with agility.

Time to Act

The end-of-life announcement for SAP's IDM system should serve as a wake-up call for organizations still reliant on traditional on-premises identity systems. The clock is ticking, and the time to start planning is now. By conducting a thorough analysis of current and future requirements, avoiding over-customization, and embracing automation, organizations can ensure they are well-prepared for the evolving world of identity governance and access management. The future of identity governance lies in flexible, scalable solutions that integrate seamlessly with hybrid IT environments - don't wait until 2027 to start the journey.


Northern Block

Announcing Our Strategic Partnership with Digital Governance Institute

Northern Block partners with Digital Governance Institute (DGI) to deliver joint governance consulting services and Trust Registry Infrastructure.

We are excited to announce that Northern Block has entered into a strategic partnership with Digital Governance Institute (DGI) to offer joint governance consulting services and product solutions to the digital trust ecosystem. Together, we aim to combine Northern Block’s Trust Registry Infrastructure as a Service (IaaS) with DGI’s renowned ecosystem network governance services, creating a solution that provides both robust technical infrastructure and transparent, accountable governance frameworks.

Why We’re Doing This

The demand for secure, verifiable digital trust infrastructures is growing exponentially. Northern Block recognized early on that trust registries play a crucial role not only as utilities that make claims about ecosystem participants publicly available but also as market makers. Trust registries provide a platform for entities to verify identity and authority information, adding integrity during transactions. A trust registry is only valuable if the data inside it has integrity, and this integrity is primarily achieved through strong governance. Without clear governance and conformance programs built on it, the system risks becoming a “garbage in, garbage out” scenario. Our partnership with DGI ensures that our governance module and trust registry administration processes are aligned with the highest standards, safeguarding the integrity of the data.

DGI brings unmatched governance expertise to the table, having authored the only fully-formed governance toolkit for the Trust Over IP Foundation deployed in high-assurance environments such as Bhutan’s National Digital Identity Ecosystem and the Global Legal Entity Identifier Foundation (GLEIF). Their work on governance and conformance aligns perfectly with Northern Block’s mission to deliver high-assurance digital trust ecosystems, supported by open standards and transparent governance.

Value for the Industry

By combining forces, we are creating a total governance solution for the industry—one that ensures trust registries are not only technically sound but also governed by strong, generally accepted standards. This collaboration will help digital identity and verifiable credential ecosystems establish integrity, trust, and transparency, driving adoption in sectors that require high levels of assurance and governance.

Our joint offering adheres to globally recognized standards, with both Northern Block and DGI being leaders in the Trust over IP (ToIP) Foundation, contributing to technical and governance-related work respectively. Our work puts into practice key standards, including the Trust Registry Query Protocol and the Governance Framework Metamodel. The Trust Registry Query Protocol allows any entity to interact with a trust registry by asking a simple question: “Does Entity X have Authorization Y, in the context of Ecosystem Governance Framework Z?” Meanwhile, the Governance Framework Metamodel and toolkit help establish and implement risk-based governance for ecosystems, having already been successfully deployed in major initiatives. 
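
To make the Trust Registry Query Protocol question concrete, here is a sketch of such a lookup over a REST binding. The base URL, path, and parameter names are illustrative assumptions; the authoritative request shape is defined in the ToIP Trust Registry Query Protocol specification.

```python
# Illustrative TRQP-style lookup: "Does Entity X have Authorization Y in the
# context of Ecosystem Governance Framework Z?" Paths and parameter names
# are assumptions; see the ToIP spec for the normative binding.
import requests

REGISTRY = "https://registry.example.org"  # hypothetical trust registry

def is_authorized(entity_id: str, authorization: str, egf_id: str) -> bool:
    resp = requests.get(
        f"{REGISTRY}/entities/{entity_id}/authorization",
        params={"authorization": authorization, "egfURI": egf_id},
    )
    resp.raise_for_status()
    return resp.json().get("authorized", False)

# e.g. is_authorized("did:example:issuer42", "issue:drivers-licence",
#                    "did:example:egf:transport")
```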

This partnership brings together the governance and accountability of service providers that conform to a governing authority’s requirements and the technical assurance needed to rely on that scheme. It ensures our solution remains interoperable and scalable, allowing clients to leverage the best available technology and governance practices.

For our active customers, this partnership provides significant value. Credential issuers will benefit from increased robustness around their ecosystem credentials, enhancing their value both within and outside their respective ecosystems. Credential verifiers will gain greater confidence when interacting with holders, knowing they are practicing data minimization and requesting only the necessary data proofs, while also being able to accept credentials from other ecosystems. Credential holders will be better equipped to authenticate and verify any public entity they interact with, improving trust and security for them.

This value extends beyond digital credential use cases. Trust registries also provide significant benefits in other types of digital interactions, such as within browsers (e.g., trusted web domains), email clients (e.g., trusted emails), platforms (e.g., trusted content) or APIs (e.g., trusted access). However, for these registries to deliver real value, they must be backed by strong governance and robust processes—this is the core focus of our collaboration with DGI.

About Northern Block and DGI

Northern Block, founded in 2017 with offices in Toronto, Gatineau, and Amsterdam, has been a leader in digital trust solutions, developing a Trust Registry IaaS that supports ecosystems across various industries. We are passionate about ensuring trust in digital interactions and are working towards building a safer, more reliable digital landscape.

Digital Governance Institute (DGI), based in Bellevue, Washington, is led by Scott Perry, an expert in digital governance frameworks. DGI has provided governance and conformance services to a range of ecosystems, ensuring their solutions meet the highest standards of integrity and accountability.

For more information, please contact:

Northern Block: Website: www.northernblock.io | Email: Mathieu Glaude, Founder & CEO – mathieu@northernblock.io
Digital Governance Institute: Website: www.digitalgovernanceinstitute.com | Email: Scott Perry, Founder & CEO – info@digitalgovernanceinstitute.com

We look forward to bringing this powerful, combined offering to the market and driving the next wave of trusted digital trust solutions.

The post Announcing Our Strategic Partnership with Digital Governance Institute appeared first on Northern Block | Self Sovereign Identity Solution Provider.


Spherical Cow Consulting

From Concept to Consensus: Developing Internet Standards


I love the whole Internet standards development process and tend to collect standards development organization (SDO) meeting badges like other people collect Pokémon. (Don’t judge; there are stranger hobbies out there. Granted, none come to mind right now, but I’m sure they exist.) Having been an active part of cross-organization collaborations since around 2001, the process of developing broadly applicable standards is natural to me and deeply mystifying to anyone outside the standards development space.

The creation of an Internet standard is a journey that involves personal autonomy, teamwork, and the voices of countless stakeholders. It requires a mix of individual expertise and collective effort. Arguments, debates, concessions, compromises, and pragmatism are all characteristics of a good standards process.

The Spark of an Idea

How does a standard even get started? Every single one begins with a problem needing to be solved, sparked by a challenge or a gap in the current technology landscape. This is where personal autonomy plays its first crucial role. An individual or a small group identifies a problem and starts brainstorming a solution. This initial phase is all about creativity and innovation, with minimal constraints. It is the part the individual loves most while their management wonders when the problem will actually be _solved_.

But to move from an idea to something actionable, the next step is to find like-minded individuals who share the same problems. While technology is evolving faster than ever, it is rarely entirely new. That means one or more SDOs are almost certainly working in the space. That’s your first stop to finding others who will likely see the value in the idea and are willing to invest time and effort in developing it further.

Building Consensus: The Heart of Internet Standards Development

Of course, finding a home for an idea to turn into an Internet standard is critical; it’s also the point that people new to the idea of standards start to get overwhelmed. Depending on the type of SDO involved—whether it’s a treaty-based, industry-based, or de facto community-based organization—the process towards standardization will vary. So. Much. Process. That said, though, the core principle remains the same: consensus.

Consensus is the lifeblood of standards. It’s the point where autonomy meets teamwork. Each participant brings a unique perspective, whether they are representing a government, a corporation, or their own independent expertise. The goal is to hammer out the technical details in a way that works for everyone—or at least for most stakeholders involved.

At this stage, the process can become incredibly challenging. It’s not just about getting the technical details right; it’s about navigating the complex web of competing interests. For example, treaty-based SDOs often involve nation-state politics, where technical merit might take a backseat to broader geopolitical concerns. Meanwhile, industry-based SDOs have to balance the needs of various commercial entities, each with its own agenda.

For those engineers who want to focus purely on the tech, the non-technical skills required to move an idea forward can be excruciating to develop.

The Role of Stakeholder Engagement

And speaking of non-technical skills, the people who came up with the initial idea cannot be the only people who can offer input into the standard. There are _always_ additional stakeholders that need to be brought in. If people and organizations don’t have a say, they may not adopt the standard. A standard that isn’t adopted is ultimately a waste of time and energy. So, all this means that the process must be open enough to allow for broad participation but structured enough to keep things moving forward.

Sometimes, the standard might be developed within a small, focused community before it’s presented to a broader audience. This is often the case with de facto or community-based SDOs, where the initial work is done by a committed group of experts who are passionate about the topic. It’s my favorite way of doing things. These standards can gain significant influence if they garner widespread adoption, often transitioning into more formalized industry-based standards over time.

Publication and Beyond: The Long Tail of Internet Standards Work

Through dangers untold and hardships unnumbered (how to say you’re Gen X without saying you’re Gen X) or, more to the point, after much discussion, negotiation, and revision, the standard is finally ready for publication. Enter: MORE PROCESS. Each SDO will have a process for its participants or members to indicate support for the proposed standard to be published by that SDO.

If you’ve done your work and engaged a broad swath of stakeholders, then the approval part of the process will go much more smoothly. Standards need to be implemented, which often means dealing with feedback from those putting them into practice in the real world.

This can be frustrating for those not involved in the early stages. It’s not uncommon to hear complaints that a standard doesn’t quite fit the needs of a particular organization or use case. Hearing that at the point the initiating working group thinks it’s all done is, to say the least, wildly frustrating. And I mean frustrating for both the group that worked on the standard and the organizations who are only just hearing about it at the end of the game. Still, even late in the game, having that opportunity to catch any missing bits is important. To paraphrase an old adage: The best time to get involved in standards development was years ago, but the second best time is now.

Why You Should Care

Now, on to why you should care about the standards development process: The standards that come out of these efforts have a direct impact on how you (as vendors, enterprises, humans, etc.) operate in the digital world. For technologists, these standards shape the tools and protocols that we rely on every day. And the more diverse the input into these standards, the better they will be at addressing the needs of the global community.

By engaging in the standards development process, you not only contribute to the betterment of the industry but also ensure that your organization’s needs are met. If you’re not ready to dive in at the deep end, there are many ways to get involved. Start by participating in a community group or contributing to the early thoughts through organizations like IDPro® or conferences like the Internet Identity Workshop (IIW).

Wrap-Up: Your Role in Shaping the Future

The journey from an idea to a published standard is long and complex, but it’s also incredibly rewarding. It’s rewarding personally because while it requires the input of many, each person brings their own autonomy and expertise to the table. And from an organization’s perspective, it’s where you truly demonstrate your thought leadership on how the technology in your field should evolve. And if you’re worried you’re “not technical enough,” I promise you that there’s a role for you to play in shaping the future of Internet standards.

So, if you’ve ever found yourself frustrated by a standard that doesn’t quite meet your needs, consider this: You have the power to change that. Get involved, make your voice heard, and be part of the team that’s building the digital world of tomorrow.

If you’re interested in learning more about navigating this process or need support in engaging with standards development, don’t hesitate to reach out. With my experience across various SDOs, I’m here to help guide you through the complexities of Internet standards development.

The post From Concept to Consensus: Developing Internet Standards appeared first on Spherical Cow Consulting.


Ocean Protocol

Crypto Factor Modeling: Data Challenge Podium


In collaboration with Numerai, we challenged data scientists to create custom datasets and build multi-factor models to explain cryptocurrency price variance.

The Crypto Factor Modeling Data Challenge invited participants to analyze cryptocurrencies by developing models that explain price fluctuations. Participants gathered data from sources such as Tardis, Kaiko, CCXT, and Uniswap to create custom datasets. They then used these datasets to build multi-factor risk models identifying the factors driving cryptocurrency prices.

Numerai’s objective with this competition was to lay the groundwork for understanding factors in the cryptocurrency markets, aiming to determine whether crypto factor investing or risk models could even be feasible. The competition emphasized several key elements in the reports: creativity, methodology, statistical rigor, explanatory power, conclusiveness, and reproducibility.

The winning submissions demonstrated one or more of these qualities by crafting innovative features and factors, applying statistical techniques to uncover their relationship with cryptocurrency price variance, and proving the predictive importance of these factors.

Additionally, the winners openly shared their code and adhered to the minimum statistical rigor required for peer review. Some notable findings revealed that factors such as volatility, momentum, value, and sentiment were predictive of crypto price movements. While these findings may align with factors seen in traditional stock markets, conducting this research openly and transparently is essential.

Numerai has fostered a community of expert data scientists who can build upon these reports, further enhancing the predictive capabilities in crypto markets. These reports could serve as a starting point for crypto hedge funds interested in developing risk models or exploring factor-based investing in cryptocurrency.

Top submissions of the “Crypto Factor Modeling Data Challenge”

1st Place: NeuralNinja

The report by NeuralNinja details a structured methodology for developing multi-factor risk models to predict cryptocurrency price movements. It combines a wide range of data sources, including macroeconomic indicators from the World Bank, historical cryptocurrency prices obtained through the CCXT library, and market sentiment data from Google Trends. The report emphasizes the importance of data quality, meticulously cleaning and merging datasets to create a robust foundation for analysis. Key technical indicators such as RSI, MACD, and Bollinger Bands are calculated to enhance the dataset’s predictive power. The report also highlights the integration of features from the Numerai platform, adding depth to the analysis. This comprehensive approach ensures a thorough understanding of cryptocurrency price movements.
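
The report itself is only summarized here, but the indicators it names are standard. A minimal pandas sketch, assuming a DataFrame with a `close` price column (the column name and the 14/12/26/9/20-period settings are conventional defaults, not values taken from the report):

```python
import pandas as pd

def add_indicators(df: pd.DataFrame, close_col: str = "close") -> pd.DataFrame:
    """Append RSI(14), MACD(12/26/9), and Bollinger Bands(20, 2) columns."""
    close = df[close_col]

    # RSI: ratio of average gains to average losses over a 14-period window
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = (-delta.clip(upper=0)).rolling(14).mean()
    df["rsi"] = 100 - 100 / (1 + gain / loss)

    # MACD: difference of 12- and 26-period EMAs, plus a 9-period signal line
    ema12 = close.ewm(span=12, adjust=False).mean()
    ema26 = close.ewm(span=26, adjust=False).mean()
    df["macd"] = ema12 - ema26
    df["macd_signal"] = df["macd"].ewm(span=9, adjust=False).mean()

    # Bollinger Bands: 20-period moving average +/- 2 standard deviations
    ma20 = close.rolling(20).mean()
    sd20 = close.rolling(20).std()
    df["bb_upper"] = ma20 + 2 * sd20
    df["bb_lower"] = ma20 - 2 * sd20
    return df
```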

In the feature engineering phase, the report outlines techniques to create new variables that capture temporal dependencies and market dynamics. Lagged features of key indicators are introduced to account for past price movements while rolling statistics provide insights into longer-term trends. The report also discusses the calculation of momentum and volatility measures and interaction terms that explore combined effects on price movements. The final dataset is comprehensive, integrating macroeconomic factors, market data, technical indicators, and sentiment measures, setting the stage for developing and evaluating effective multi-factor risk models to understand and predict cryptocurrency price fluctuations.
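
As an illustration of those feature-engineering steps, here is a small sketch in the same spirit; the `return` column name, the lag choices, and the 30-period windows are placeholders rather than the report's actual parameters:

```python
import pandas as pd

def engineer_features(df: pd.DataFrame, col: str = "return") -> pd.DataFrame:
    """Add illustrative lagged, rolling, momentum, and interaction features."""
    # Lagged features capture dependence on past price movements
    for lag in (1, 7, 30):
        df[f"{col}_lag{lag}"] = df[col].shift(lag)

    # Rolling statistics summarize longer-term trends
    df[f"{col}_roll_mean30"] = df[col].rolling(30).mean()
    df[f"{col}_roll_std30"] = df[col].rolling(30).std()  # simple volatility proxy

    # Momentum over a 30-period window, and an interaction term combining
    # momentum with volatility to explore their joint effect on prices
    df["momentum30"] = df[col].rolling(30).sum()
    df["mom_x_vol"] = df["momentum30"] * df[f"{col}_roll_std30"]
    return df
```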

2nd Place: Ahan

The report A Detailed Case Study on Crypto Multi-factor Risk Analysis by Ahan investigates cryptocurrency investment strategies through a multi-factor framework traditionally used in equity markets. It highlights the rapid growth of the cryptocurrency market, which reached a capitalization of approximately $1,676 billion in 2023, with Bitcoin and Ethereum being the dominant assets. The study employs various financial models, including the Fama-MacBeth regression, Fama-French models, and machine learning techniques, to analyze the predictive capabilities of factors such as market, size, value, and momentum. It emphasizes the need for a tailored approach to understand cryptocurrency returns and risks due to their unique characteristics and high volatility compared to traditional assets.
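
For readers unfamiliar with the Fama-MacBeth procedure the study mentions, the two-pass idea fits in a few lines: run a cross-sectional regression of asset returns on factor exposures in each period, then average the estimated risk premia over time. A minimal sketch with hypothetical inputs (the data frames and factor names below are assumptions, not the report's data):

```python
import numpy as np
import pandas as pd

def fama_macbeth(returns: pd.DataFrame, exposures: dict) -> tuple:
    """Two-pass Fama-MacBeth.

    returns:   periods x assets frame of returns
    exposures: factor name -> periods x assets frame of factor exposures
    """
    names = list(exposures)
    gammas = []
    for t in returns.index:
        y = returns.loc[t]
        X = pd.DataFrame({k: v.loc[t] for k, v in exposures.items()})
        X.insert(0, "const", 1.0)
        mask = y.notna() & X.notna().all(axis=1)
        # Cross-sectional OLS: estimated risk premia for this single period
        coef, *_ = np.linalg.lstsq(X[mask].values, y[mask].values, rcond=None)
        gammas.append(coef)
    g = pd.DataFrame(gammas, columns=["const"] + names, index=returns.index)
    # Time-series average of the premia, with Fama-MacBeth standard errors
    tstats = g.mean() / (g.std() / np.sqrt(len(g)))
    return g.mean(), tstats
```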

Key findings reveal that traditional models like CAPM are less effective in explaining cryptocurrency returns, while modified Fama-French models incorporating cryptocurrency-specific factors provide better insights. The analysis indicates that smaller cryptocurrencies often outperform larger ones, mirroring trends in equity markets. Additionally, investor sentiment and social media influences significantly impact cryptocurrency pricing. The research suggests that systematic inconsistencies in the market could allow for return predictability, urging a reevaluation of conventional investment evaluation methods to accommodate the distinct dynamics of cryptocurrencies.

3rd Place: Malihe

The report submitted by Malihe focused on a multi-factor model for forecasting cryptocurrency returns, analyzing nearly 120 cryptocurrencies by integrating various market, economic, and social media factors. Key market factors include momentum, market cap, liquidity, and volatility, while economic indicators such as the Federal Funds Effective Rate and inflation rates were also examined. Data was sourced from CCXT, Coingecko, Google Trends, and FRED. The study found strong correlations between returns and the High Minus Low (HML) factor, indicating its significant influence on performance. However, economic factors showed weak correlations when analyzed in isolation.

The modeling results revealed that HML and momentum are significant predictors of returns, while other factors like market cap and volatility did not demonstrate substantial effects. The analysis also highlighted that higher liquidity is generally associated with better market performance. Interestingly, Google Trends data was included but showed weak correlations with cryptocurrency returns, suggesting it may not be a reliable standalone predictor. Overall, the findings emphasize the importance of understanding market dynamics, particularly momentum and value factors, to inform investment strategies in the volatile cryptocurrency landscape.
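
As a rough illustration of how an HML-style factor can be constructed and compared with returns (the value metric and tercile breakpoints below are illustrative assumptions, not Malihe's actual construction):

```python
import pandas as pd

def hml_factor(returns: pd.DataFrame, value: pd.DataFrame) -> pd.Series:
    """High-minus-low factor: long the top value tercile, short the bottom.

    returns, value: periods x assets frames, where `value` is any value metric.
    """
    def spread(t):
        v = value.loc[t].dropna()
        high = v[v >= v.quantile(2 / 3)].index
        low = v[v <= v.quantile(1 / 3)].index
        return returns.loc[t, high].mean() - returns.loc[t, low].mean()

    return pd.Series({t: spread(t) for t in returns.index}, name="HML")

# Correlation of each asset's return series with the HML factor:
# correlations = returns.corrwith(hml_factor(returns, value))
```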

Interesting Facts

Rapid Market Growth: Since the launch of Bitcoin in 2008, the cryptocurrency market has exploded, reaching a capitalization of approximately $1,676 billion in 2023, with Bitcoin and Ethereum representing 41.8% and 18.1% of this market, respectively.

High Volatility: Cryptocurrencies are known for their extreme volatility, with most exhibiting beta values greater than 1. For instance, Dogecoin has a beta of approximately 3.045 against the S&P 500, indicating it is over three times as volatile as the index (the standard beta calculation is sketched after this list).

Emergence of Specialized Funds: Over 170 hedge funds focused on cryptocurrencies have emerged since 2017, highlighting the growing institutional interest in crypto trading and hedging strategies.

Predictive Models: Traditional models like CAPM are less effective for cryptocurrencies, while modified Fama-French models that include cryptocurrency-specific factors like size and momentum provide better insights into returns.

Investor Sentiment Impact: Social media sentiment significantly influences cryptocurrency pricing, indicating that market psychology plays a crucial role in determining returns.
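
The beta figures quoted above come from the usual covariance ratio. A minimal sketch, assuming aligned daily return series for the asset and the index:

```python
import pandas as pd

def beta(asset_returns: pd.Series, market_returns: pd.Series) -> float:
    """Beta = Cov(asset, market) / Var(market) over the overlapping sample."""
    joined = pd.concat([asset_returns, market_returns], axis=1).dropna()
    return joined.cov().iloc[0, 1] / joined.iloc[:, 1].var()

# e.g. beta(doge_daily_returns, sp500_daily_returns) would land near 3
# for the period the report studied (the series names are hypothetical).
```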

2024 Championship

The challenges offer prize pools from $10,000 to $20,000, distributed among the top 10 participants. Our points system for the championship allocates between 100 and 200 points to the top 10 finishers in each challenge, with each point valued at $100. Participants accumulate these points toward the 2024 Championship. Last year, the top 10 champions received an additional $10 for each point they had earned.
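
To make the arithmetic concrete: a participant who tops a challenge worth 200 points would earn $20,000 from it (200 × $100), and under last year's bonus would have received a further $2,000 (200 × $10) if they also finished the year among the top 10 champions.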

2024 Championship standings prior to the Crypto Model Factoring challenge

Additionally, the top 3 participants in each challenge can collaborate directly with Ocean to develop a profitable dApp based on their algorithm. Data scientists maintain their intellectual property rights while we provide support in monetizing their innovations.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to stay up to date. Chat directly with the Ocean community on Discord, or track Ocean’s progress on GitHub.

Crypto Model Factoring: Data Challenge Podium was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Tokeny Solutions

How Tokeny’s Platform Empowers Fund Administrators To Act in Onchain Finance

Product Focus

How Tokeny’s Platform Empowers Fund Administrators To Act in Onchain Finance

This content is taken from the monthly Product Focus newsletter in September 2024.

In previous product newsletters, we focused mainly on the technical features we’ve developed. In this edition, we’d like to highlight how our onchain operating system is utilized by one of the most crucial stakeholders in tokenized funds: fund administrators.

Fund administrators aiming to manage tokenized funds need onchain tools to handle onboarding, compliance, asset management, and secondary market operations seamlessly. Tokeny provides a complete suite of solutions, offering the necessary tools for investor onboarding, token issuance, compliance rules management, full lifecycle fund servicing, and secondary market functionality—ensuring a smooth transition to onchain fund management.

How Our Products Meet Fund Administrators’ Needs at Each Stage:

Onboarding: Fund administrators require a streamlined onboarding process with KYC checks and secure payment collection. Our Investor App, part of the Tokeny Platform, offers a comprehensive solution for collecting investor information, conducting digital verification, and supporting payments in a fully integrated and digital manner.

Onboarding needs and solutions:
- Allow easy browsing of offers: list asset offers and details of assets
- Collect investor info: customizable form fields and digital workflows
- Digital verification and signing: integrated digital verification and signing tools (e.g. SumSub, DocuSign)
- Automate calculation: automated calculation of exchange rates and fees
- Collect payments: support multi-currencies and all payment methods, from fiat to onchain cash and crypto

Issuance: Fund administrators need to represent assets onchain in a compliant manner. Tokeny Platform (turnkey solution) or T-REX Engine (APIs) allows them to tokenize assets on any preferred blockchain with upgradable smart contracts.

Issuance needs and solutions:
- Flexible blockchain support: support any EVM chain with a switch-chain feature
- Represent assets onchain: token detail setup and one-click token deployment
- Compliance setup: set investor rules and transfer restrictions
- Upgradability: upgradable smart contracts

Servicing: Managing onchain funds requires full lifecycle servicing, from compliance to investor relations. T-REX Platform or T-REX Engine enables managing KYC/AML, investor data, cap tables, and token controls, automating many operational tasks while keeping real-time records of ownership.

Servicing needs and solutions:
- KYC/AML management: onchain qualification of investors and automated onchain compliance validation
- Private market management: order management for subscription and redemption
- Tokenized assets management: selective freeze of tokenized assets, suspension of all tokenized assets, mandated transfers, redemption, or token recovery
- Data management: manage offering details, identities, and investor details
- Cap table management: real-time ownership records, with positions checkable at any time
- Investor relations: built-in email notification tool
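
As a rough illustration of what controls like selective freezes look like at the smart contract level, here is a minimal web3.py sketch against an ERC-3643 token. The RPC endpoint, addresses, private key, and hand-written ABI fragment are all placeholders, and this sketches the ERC-3643 interface generally rather than Tokeny's own platform API:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # placeholder RPC endpoint

# Hand-written ABI fragment for two ERC-3643 agent-only operations
ERC3643_ABI = [
    {"name": "freezePartialTokens", "type": "function", "stateMutability": "nonpayable",
     "inputs": [{"name": "_userAddress", "type": "address"},
                {"name": "_amount", "type": "uint256"}],
     "outputs": []},
    {"name": "forcedTransfer", "type": "function", "stateMutability": "nonpayable",
     "inputs": [{"name": "_from", "type": "address"},
                {"name": "_to", "type": "address"},
                {"name": "_amount", "type": "uint256"}],
     "outputs": [{"name": "", "type": "bool"}]},
]

TOKEN = "0x0000000000000000000000000000000000000000"     # placeholder token address
INVESTOR = "0x0000000000000000000000000000000000000001"  # placeholder investor wallet

token = w3.eth.contract(address=TOKEN, abi=ERC3643_ABI)
agent = w3.eth.account.from_key("0x" + "11" * 32)        # placeholder agent key

# Freeze part of an investor's balance, e.g. pending a compliance review.
# Onchain, only a wallet registered as a token agent may call this.
tx = token.functions.freezePartialTokens(INVESTOR, 100 * 10**18).build_transaction(
    {"from": agent.address, "nonce": w3.eth.get_transaction_count(agent.address)}
)
signed = agent.sign_transaction(tx)
w3.eth.send_raw_transaction(signed.raw_transaction)  # .rawTransaction on web3.py < 7
```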

Secondary Market: Handling secondary market operations is key to increasing liquidity. Tokeny Platform or T-REX Engine empowers fund administrators to control distribution channels, approve peer-to-peer trades, and verify deposit wallets for compliant trading.

Secondary market needs and solutions:
- Automate operations: advanced transfer functions (DvP, etc.)
- Distribution channel control: authorize distribution channels
- Secondary transfers: authorize and approve peer-to-peer trades and trading intention offers

Tokeny’s solutions empower fund administrators to reduce operational friction and ensure full control over compliance, distribution, and data management, all in real-time.

We are excited to work with leading fund administrators globally and look forward to helping more fund administrators accelerate the adoption of onchain finance.

Xavi Aznal
Head of Product

Subscribe to the Newsletter

This monthly Product Focus newsletter is designed to give you insider knowledge about the development of our products. Fill out the form below to subscribe to the newsletter.

Other Product Focus blogs:
- How Tokeny’s Platform Empowers Fund Administrators To Act in Onchain Finance (20 September 2024)
- 56% of Fortune 500 Are Onchain: APIs Are Your Key to Staying Ahead (23 August 2024)
- The Journey to Becoming the Leading Onchain Finance Operating System (19 July 2024)
- Streamline On-chain Compliance: Configure and Customize Anytime (3 June 2024)
- Multi-Chain Tokenization Made Simple (3 May 2024)
- Introducing Leandexer: Simplifying Blockchain Data Interaction (3 April 2024)
- Breaking Down Barriers: Integrated Wallets for Tokenized Securities (1 March 2024)
- Tokeny’s 2024 Products: Building the Distribution Rails of the Tokenized Economy (2 February 2024)
- ERC-3643 Validated As The De Facto Standard For Enterprise-Ready Tokenization (29 December 2023)
- Introducing Multi-Party Approval for On-chain Agreements (5 December 2023)

Tokenize securities with us

Our experts, with decades of experience across capital markets, will help you digitize assets on decentralized infrastructure.

Contact us

The post How Tokeny’s Platform Empowers Fund Administrators To Act in Onchain Finance appeared first on Tokeny.


KuppingerCole

Nov 21, 2024: Passkeys in a Zero Trust World – Blessing or Curse?

In the modern digital landscape, organizations are confronted with growing cybersecurity challenges that demand stronger authentication methods. Zero Trust frameworks have become essential for bolstering security postures, placing a significant emphasis on identity verification. As traditional passwords become more vulnerable, passkeys are gaining traction for their phishing-resistant capabilities and their potential to transform authentication within Zero Trust environments.

Metadium

Termination of Keepin Service

Dear Community,

We want to inform you that the Keepin app will officially terminate all services as of September 30, 2024. We want to express our gratitude to everyone who has used the Keepin app during this time. Your support has been invaluable.

Starting October 1, all support for the service, including updates, new downloads, and operational support, will end.

We truly appreciate your support and understanding and apologize for any inconvenience caused by the service’s discontinuation.

Thank you.

Hello, this is the Metadium team.

Please be informed that the Keepin app will officially terminate all services as of September 30, 2024.

We sincerely thank everyone who has used the Keepin app.

Starting October 1, support for all services, including updates, new downloads, and operational support, will end.

We sincerely thank you for using the Keepin app and ask for your understanding that we are unable to continue the service.

Thank you.

The Metadium Team

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

Termination of Keepin Service was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 19. September 2024

KuppingerCole

IGA as the Centerpiece of Every Security Transformation Program

In today's digital landscape, companies face growing cybersecurity challenges. Attacks on digital identities are increasing and are often successful, as recent incidents show. At the same time, digital identity is a key component of Zero Trust architectures, which enable controlled access to corporate data.

Identity Governance and Administration (IGA) plays a central role in addressing these challenges. Modern IGA solutions offer comprehensive capabilities for managing digital identities, from automating access rights to detecting anomalies. These capabilities are crucial for implementing robust security strategies in increasingly complex IT environments.

Dr. Phillip Messerschmidt, Lead Advisor at KuppingerCole, will examine current authorization trends around IGA, with a strong focus on authorization models and strategic considerations for realizing their benefits. Using several examples, he will explain use cases for the various authorization models and show how they support companies in implementing their IAM and cybersecurity strategies.

Klaus Hild, Identity Strategist at SailPoint, and Moritz Anders, Partner for Digital Identity at PwC, will provide practical insights into the implementation of IGA solutions. They will explain how SailPoint's Identity Security Platform supports the various IGA capabilities and how PwC's capability model for Identity & Access Management enables large IAM transformations.




auth0

Protecting REST APIs Behind Amazon API Gateway Using Okta

Learn how to set up an Amazon API Gateway and secure the REST API with Okta Customer Identity Cloud (CIC) Adaptive MFA Risk Score

Elliptic

Crypto regulatory affairs: UAE takes steps to bolster its crypto regulatory framework

The United Arab Emirates continues to take important steps to cement its status as a hub for well-regulated cryptoasset activity. 


KuppingerCole

Open Multi-Cloud, Intelligent Business Applications, and Security Robots

by Alexei Balaganski

Last week, I had an opportunity to attend CloudWorld 2024. Oracle uses its flagship event to unveil the most important announcements of the year, and after the break caused by the Covid pandemic, it was moved from San Francisco to Las Vegas. To be honest, I’m not a fan of the city’s scorching heat (it was over 40 degrees C outside at times). Thankfully, the agenda created by the company’s analyst relations team was so packed that I spent most of the four days inside the air-conditioned venue, attending keynotes and sessions, talking to Oracle’s executives and customers, and, of course, networking with other analysts. Here are some of my takeaways from the event.

The Open Multi-cloud Era

The beginning of this era was announced by Larry Ellison in his keynote, where he unveiled that Oracle now has strategic partnerships with each of the Big Three cloud providers to make Oracle Autonomous Database available directly in their respective infrastructures, with full feature parity with OCI’s own services and without the latency issues of traditional multi-cloud deployments. What this essentially means is that Oracle’s engineers deploy the company’s Oracle Cloud Infrastructure, specifically its Exadata platform and Oracle databases, directly in Microsoft’s, Google’s, and AWS’ own cloud datacenters and make it available to their customers through the native user interface, billing, and technical support channels of each provider.

Now, some purists might argue that this architecture is not really multi-cloud, since everything is contained within the infrastructure of each provider, and data does not flow between clouds (which, incidentally, is great news for AWS customers, since they don’t need to worry about egress fees). However, what’s important for customers is that they can now combine the best native services of each provider with all the latest features of the database they have known and loved for decades.

There is something ironic about Oracle going full circle—from the company’s roots in offering “a database that runs everywhere” on-premises to the new cloud model introducing a whole zoo of partially incompatible database services across providers to finally bringing the same “everywhere” promise back to life at an entirely different scale.

On a somewhat related note, the concept of “private cloud” is also undergoing a profound change. Oracle is known for offering a broad range of cloud deployment options to their customers, calling this flexible portfolio their “Distributed Cloud”. This year, the company announced the new OCI Dedicated Region 25, which will be available in a smaller, scalable size starting at only three racks and will be rapidly deployable within weeks. It has a 75% smaller launch footprint, simplified datacenter requirements, and support for OCI’s 150+ AI and cloud services. What used to be possible only for large enterprises is now much more affordable.

AI Transforms Everything

Of course, artificial intelligence was another major topic during the conference—for both the company and its customers and partners. And Oracle had tons of announcements of new AI features and capabilities throughout their entire portfolio. At the infrastructure level, for example, OCI Supercluster, announced for 2025, will be the largest ever hardware platform, labeled as a Zettascale supercomputer, powered by over 100K Nvidia GPUs to run the most demanding AI workloads.

Both Oracle Database 23ai and HeatWave offer a multitude of built-in AI capabilities, from somewhat overlooked but still extremely useful machine learning algorithms to vector search that brings enterprise data to generative AI models. Needless to say, the big differentiator for both solutions, as opposed to specialized vector databases, is the ability to keep data in multiple formats (relational, graph, JSON, and now vector) in the same database and to run complex hybrid queries across them. We had an opportunity to hear from customers already using these capabilities in production, and the general agreement was that it just worked without any additional learning curve.

All Oracle’s industry apps and business analytics solutions have received major new AI-powered capabilities as well. Curiously, even Oracle APEX, the company’s “hidden gem” low-code application development platform, has received a major boost from the AI hype. For quite a while already, the APEX team has been working on a new programming language, more abstract and human-readable, to replace its original PL/SQL and make APEX much more compatible with modern CI/CD pipelines. This development, in the form of an AI Assistant, also enabled them to make APEX apps that can be generated through a conversational approach. Apparently, using this technology internally already allows Oracle to develop their business apps 10 times faster.

One major concern I was happy to hear addressed during the event is what I call “AI agility.” Just like with cryptography, where quantum computers can potentially make existing algorithms irrelevant overnight, the current state of the AI market is also extremely unpredictable. Who knows which vendors, models, and algorithms will still survive in the next five years? Any sensible organization should be prepared to be agile with their AI deployments and ready to address these risks.

Securing Apps with Data Robots

Apparently, Larry Ellison loves robots. At least that was his term for Oracle’s approach towards security. To secure sensitive data in the cloud at a massive scale, humans are no longer good enough. Automation is the only viable approach, and Oracle has a plan for that, too. Needless to say, their robot DBA is, of course, the Oracle Autonomous Database, and by next year, they promise to migrate all their apps and services to it, eliminating the human factor from database operations and security. They also plan to get rid of password-based authentication completely, which is definitely a welcome if somewhat bold promise.

Zero Trust Packet Routing (ZPR) is another promising development, introducing the identity component and security attributes to the network fabric of the Oracle Cloud. This approach combines Zero Trust with policy-based access controls to ensure that security policies can be applied independently of the underlying network configuration. This technology is currently in early access.

My biggest takeaway from CloudWorld 2024 is that Oracle is not just delivering a long list of new features for its customers—all of these developments are incorporated into the company’s entire product and service portfolio. In the end, Oracle is not afraid of eating its own dog food while customers gain value from Oracle’s integrated, full-stack approach.


Ocean Protocol

DF107 Completes and DF108 Launches

Predictoor DF107 rewards available. DF108 runs Sept 19 — Sept 26, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 107 (DF107) has completed.

DF108 is live today, Sept 19. It concludes on September 26. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF108 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
- To claim ROSE rewards: see instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF108

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF107 Completes and DF108 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 18. September 2024

Microsoft Entra (Azure AD) Blog

Microsoft Entra Internet Access now generally available

With the rise of hybrid work, identity and network security professionals are now at the forefront of protecting their organizations. Traditional network security tools fall short in meeting the integration, complexity, and scale requirements of anywhere access, leaving organizations exposed to security risks and poor user experiences. To address this, network security and identity must function as a unified force in defense. Only when identity and network controls deeply integrate into secure access, can we fully deliver on the core Zero Trust principles, where trust is never implicit and access is granted on a need-to-know and least-privileged basis across all users, devices, and applications.

Microsoft Entra Internet Access

On July 11th, 2024, we announced general availability (GA) of Microsoft Entra Suite, which includes Microsoft Entra Internet Access, part of the Security Service Edge (SSE) solution. Internet Access secures access to all internet and SaaS applications and resources with an identity-centric secure web gateway (SWG) solution, unifying identity and network access controls through a single Zero Trust policy engine to close security gaps and minimize the risk of cyberthreats. Our solution integrates seamlessly with Microsoft Entra ID, eliminating the need to manage users, groups, and apps in multiple locations. It protects users, devices, and resources with capabilities such as universal Conditional Access, context aware network security, and web content filtering, so you no longer need to manage multiple disconnected network security tools.

Figure 1: Secure access to all internet and SaaS applications and resources, with an identity-centric SWG.

Unified identity and network security

Our deep integration with Entra ID enables Conditional Access, and later continuous access evaluation (CAE), to be extended to any external destination, internet resource, and cloud application, even if they’re not integrated or federated with Entra ID. This integration with Conditional Access enables you to enforce granular controls, leveraging device, user, location, and risk conditions by applying network security policies tailored to the requirements of your enterprise. Additionally, Microsoft Entra Internet Access provides enhanced security capabilities, such as token replay protection and data exfiltration controls, for Entra ID federated applications.

Figure 2: Rich user, device, location, and risk awareness of Conditional Access for network security policy enforcement

Protect your users with context aware network security

With Microsoft Entra Internet Access, you can now link your network security policies to Conditional Access, providing a versatile tool that can adapt to various scenarios for your SWG policy enforcement. With web category filtering, you can easily allow or block a vast range of internet destinations based on pre-populated web categories. For more granular control, you can use fully qualified domain name (FQDN) filtering to establish policies for specific endpoints or override general web category policies effortlessly.

For instance, you can create a policy that allows your finance team access to critical finance applications while restricting access for the rest of your organization. Furthermore, you can add risk-based filtering policies that dynamically adapt to a user’s risk level with Entra ID Protection, restricting access to these destinations for members whose user risk is elevated and providing additional protection for your organization. Another great example is granting just-in-time access to Dropbox while blocking all other external storage sites, leveraging the deep integrations between Microsoft Entra Internet Access, Conditional Access, and Entra ID Governance workflows.
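
To make that policy model concrete, here is a deliberately simplified, hypothetical sketch of how category rules, FQDN overrides, group scoping, and a risk condition can compose; the rule structure is invented for illustration and is not Entra's actual configuration schema:

```python
from dataclasses import dataclass

RISK_ORDER = ["low", "medium", "high"]

@dataclass
class Rule:
    action: str                      # "allow" or "block"
    fqdn: str | None = None          # exact-match FQDN rule
    category: str | None = None      # web-category rule
    groups: frozenset = frozenset()  # empty = applies to everyone
    max_user_risk: str = "high"      # block when user risk exceeds this level

def evaluate(rules, fqdn, category, user_groups, user_risk):
    """First matching rule wins; list FQDN overrides before category rules."""
    for r in rules:
        applies = not r.groups or (r.groups & user_groups)
        matches = (r.fqdn == fqdn) if r.fqdn else (r.category == category)
        if applies and matches:
            if RISK_ORDER.index(user_risk) > RISK_ORDER.index(r.max_user_risk):
                return "block"  # risk-based override, as with Entra ID Protection
            return r.action
    return "allow"  # illustrative default

rules = [
    Rule("allow", fqdn="finance.example.com",
         groups=frozenset({"finance"}), max_user_risk="low"),
    Rule("block", category="finance-apps"),
]
print(evaluate(rules, "finance.example.com", "finance-apps", {"finance"}, "low"))
# -> "allow" for a low-risk finance user; everyone else hits the block rule
```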

In the coming months, we’ll be adding new capabilities such as TLS inspection and URL filtering to provide even more granular control for your web filtering policies. Plus, we’ll be adding Threat Intelligence (TI) filtering to prevent users from accessing known malicious internet destinations.

Provide defense in depth against token replay attacks with Compliant Network check

With the addition of the new Compliant Network control, you can prevent token replay attacks across the authentication plane by extending the Compliant Network check with Conditional Access to any Entra ID federated internet application, including Microsoft 365 applications. This feature also ensures that users cannot bypass the SSE security stack while accessing applications. The Compliant Network check eliminates the inherent disadvantages of source-IP-based location enforcement: cumbersome IP management and the hairpinning of remote users’ traffic through branch networks.

Protect against data exfiltration by enabling universal tenant restrictions (TRv2) controls

With Microsoft Entra Internet Access you can enable Universal Tenant Restriction controls across all managed devices and network branches, agnostic of OS and browser platform. Tenant Restriction v2 is a strong data exfiltration control enabling you to manage external access risks from your managed devices and networks by curating a granular allow or deny list of foreign identities and applications that can or cannot be accessed.

Figure 5: Universal tenant restrictions

Avoid obfuscating original user source IP

Traditional third-party SSE solutions hide the original source IP of users, only showing the proxy IP address, which degrades your Entra ID log fidelity and Conditional Access controls. Our solution proactively restores original end-user source IP context for Entra ID activity logs and risk assessment. It also maintains backward compatibility for source IP based location checks in your Conditional Access policies.

Deliver fast and consistent access at a global scale

Our globally distributed proxy, with multiple points of presence close to your user, eliminates extra hops to optimize traffic routing to the internet. You can connect remote workers and branch offices through our global secure edge that’s only milliseconds away from users. We have thousands of peering connections with internet providers and SaaS services, and for services like Microsoft 365 and Azure, you avoid performance penalties through additional hops and improve overall user experience by sending the traffic directly to Microsoft WAN infrastructure.

Figure 7: Microsoft's global Wide Area Network (WAN)

Attain deep insights and network analytics using in-product dashboards:

Our comprehensive in-product reports and dashboards are designed to be easy to digest and share a complete, holistic view of your entire ecosystem within your organization. You can monitor deployment status, identify emerging threats through comprehensive network and policy monitoring logs, and address problems quickly. Our dashboard delivers an overview of the users, devices, and destinations connected through Microsoft’s SSE solution. We show cross-tenant access within your enterprise, as well as the top network destinations in use and other policy analytics.

Figure 8: In-product dashboard

Microsoft Entra Internet Access architecture overview

Microsoft’s SSE architecture for client and branch connectivity streamlines network access and security. The Global Secure Access standalone client on the endpoint is currently available for Windows and Android; macOS and iOS are coming soon. Branch connectivity relies on site-to-site connections from network devices to Microsoft’s SSE edge services; Microsoft traffic is available now, with internet access traffic being added soon. Traffic from both client and branch connectivity models is secured and tunneled through Microsoft’s SSE edges. Additionally, we have partnered with HPE Aruba and Versa to integrate our SSE solution with their SD-WAN offerings, with additional SD-WAN partners coming soon.

Side-by-side interoperability with third-party SSE solutions

One of the unique advantages of Microsoft’s SSE solution is its built-in compatibility with third-party SSE solutions, allowing you to send only the traffic you choose to Microsoft’s SSE edges. For example, you can enable the Microsoft Traffic profile to manage Microsoft 365 and Entra ID traffic and optimize performance for your Microsoft applications while using other providers for the remaining traffic. Configuring traffic forwarding profiles is straightforward, allowing for precise control over internet and SaaS traffic, including Microsoft 365. Traffic profiles are also user aware and can be directed to specific groups in your enterprise as appropriate.

Figure 9: Flexible deployment options

Conclusion

Microsoft Entra Internet Access offers a robust, identity-centric SWG solution that secures access to internet and SaaS applications. By unifying Conditional Access policies across identity, endpoint, and network, it ensures every access point is safeguarded, adapting to the needs of a hybrid workforce and mitigating sophisticated cyberattacks. This strategic shift not only enhances security but also optimizes user experience, demonstrating Microsoft's commitment to leading the transition to cloud-first environments.

Learn more and get started 

Stay tuned for more Microsoft Entra Internet Access blogs and for a deeper dive into Microsoft Entra Private Access. For more information, watch our recent Tech Accelerator product deep dives.

To get started, contact a Microsoft sales representative, begin a trial, and explore Microsoft Entra Internet Access and Microsoft Entra Private Access general availability. Share your feedback to help us make this solution even better. 

Anupma Sharma, Principal Group Product Manager

Read more on this topic

- Simplify your Zero Trust strategy with the Microsoft Entra Suite and unified security operations platform, now generally available
- Microsoft’s Security Service Edge products now in General Availability
- Microsoft Entra Internet Access
- Microsoft Entra Private Access

Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

- Microsoft Entra News and Insights | Microsoft Security Blog
- Microsoft Entra blog | Tech Community
- Microsoft Entra documentation | Microsoft Learn
- Microsoft Entra discussions | Microsoft Community

Civic

Civic and Rentality Verify Drivers’ Licenses and Age Onchain, Bringing New Standard for Car Rental Security and Compliance

The Civic ID Verification Pass provides real-world benefits to users who can verify their identity and age quickly and rent a car directly from a car owner, without intermediaries SAN FRANCISCO, 18 SEPTEMBER: Civic, a leader in tokenized identity on the verifiable web, joins forces with Rentality, the first web3 car rental platform, to securely […]

The post Civic and Rentality Verify Drivers’ Licenses and Age Onchain, Bringing New Standard for Car Rental Security and Compliance appeared first on Civic Technologies, Inc..


Tokeny Solutions

Tokeny’s Talent | Shurong

Tokeny’s Talent | 18 September 2024

Shurong Li is Head of Marketing at Tokeny; she joined the company in 2018.

Reflecting on the 6-year Journey

You’ve been with the company for six years now, starting as an intern and now leading the department. How has the company supported your growth during this time?

Giving up is how we define failure. Thanks to this spirit, we’ve been given a safe and supportive environment to try, fail, and try again until we succeed. It feels like each single one of us is an independent entrepreneur within a group of entrepreneur communities. We embrace failures and we celebrate wins. I’m really thankful for being part of the team where I feel fully empowered and trusted throughout each stage, from the beginning as a junior to now holding a leadership position. This drives me to push my limits to keep learning and growing.

Tokeny’s Culture Involvement

Tokeny has grown significantly since you joined. How has the company culture evolved in your opinion?

In the early days, when we were just a team of 8, everything felt like an exciting experiment. We moved extremely fast, exchanged ideas freely, and every day was a new opportunity to innovate. Our culture thrived on flexibility and passion. As we grew to over 40, things began to shift. All departments grew bigger, and communication across teams became more complex.

“I’m impressed by how we’ve managed to keep our agility, responding quickly to new challenges while becoming more organized.”

We’ve introduced standardized processes and well-defined personal objectives and key desired results that have brought order without stifling creativity. Our culture, in my view, has only become stronger. We’ve managed to keep a safe and supportive environment where everyone can do their best work, knowing they have a clear goal to achieve and a process to follow. It’s this balance between agility and structure that makes me excited about our future.

The last point I’d like to emphasize is that our team is committed to both achieving results and maintaining work-life balance. When urgent matters arise, we tackle them swiftly and efficiently, ensuring that nothing is left unresolved. At the same time, if no urgent issues arise, we ensure that our team members can fully enjoy their holidays, recognizing that recharging is essential for sustained performance. Management genuinely puts people first, valuing rest as a way to prevent burnout and keep energy levels high. This balanced approach allows us to consistently deliver exceptional results while keeping the team motivated and at their best.

Leadership Style

Now that you’re the Head of Marketing, how would you describe your leadership style, and how do you ensure that the collaborative and supportive environment you first experienced continues to thrive under your guidance?

When I think about leadership, I often reflect on the lessons I’ve learned from those who have led me, especially our CEO, Luc. Luc is a true visionary, someone who inspires everyone around him. Watching him lead, I realized that being a great leader isn’t just about giving orders, it’s about being a supportive mentor who inspires others to reach their full potential. I would describe my leadership style the same way.

“Being a great leader isn’t just about giving orders, it’s about being a supportive mentor who inspires others to reach their full potential.”

I believe in the power of starting with why. Whenever I give guidance, I always begin by explaining why a task is important and why we should approach it a certain way. This approach helps my team understand the bigger picture and see how their work fits into our overall goals. And often, it sparks new ideas and better solutions, as team members feel empowered to contribute their perspectives.

At the heart of my leadership philosophy is a simple belief: it’s all about caring for people. I strive to create an environment where everyone feels safe, supported, and valued. To me, leadership is about being human and supporting people to achieve great things. After all, when people feel cared for and understood, they are more likely to bring their best selves to their work.

In the end, my goal is to inspire my team, just as Luc has inspired me, to believe in themselves and in the impact they can make. I think all of the leaders at Tokeny have a similar approach, and this is what drives Tokeny to achieve extraordinary things. By building a culture of trust, clarity, and shared purpose, we honor one of Tokeny’s core values: putting people first.

Company Values in Practice

You mentioned in your previous interview how much you appreciated the creativity and boldness the company encourages. Can you share an example of a project where you or your team took a bold approach, and how it was received?

Creating the non-profit ERC-3643 Association was one of the boldest steps we’ve taken, driven by our vision of unlocking open finance for everyone. At first, it seemed like we might have more to lose than to gain. By forming a non-profit association, we welcomed contributions from anyone. We knew it would drive opportunities for others and create more competition. However, we believed in a bigger picture.

Our mission is to break down the silos of finance, because that’s the only way we, as an industry, can achieve what we all want: an open and connected financial world. Forming a non-profit association was a bold decision we made to reach that goal, and it paid off.

Today, ERC-3643 is recognized by financial institutions and governments as a market standard. More than 75 members have joined the association, and numerous partnerships have been formed thanks to interoperability. It has even been awarded the “Best Initiative of the Year” by Deloitte. We will continue contributing to the industry through this association to accelerate the adoption of onchain finance.

Reflections and Future Outlook

If you could give advice to your younger self, just starting out at Tokeny, what would it be?

If I could give my younger self advice, it would be to start long-term content creation much earlier. I began focusing on it only four years ago, but writing is valuable for all professionals, not just those in marketing. It helps solidify knowledge, clarify thoughts, and deepen understanding. I’ve grown to enjoy it so much that it’s become a personal habit, where I write and reflect regularly. It’s not just about writing more, but about crafting concise and impactful communication. This practice has sharpened my thinking and helped me quickly dive into and understand any new topic.

As someone who has been with the company through significant milestones, where do you see Tokeny going in the next five years, and how do you envision your role evolving in that journey?

Over the past seven years, I’ve witnessed the market shift from having no institutional interest to now working closely with many of them. The transformation has been remarkable. We’re currently at the early adopter stage of the technology adoption curve of tokenization. The next five years will be the most exciting yet, as we expect massive adoption to really take off. Looking ahead, I see Tokeny playing a pivotal role in accelerating the adoption of onchain finance.

“I truly believe this is just the beginning of our journey to thrive.”

While the technology itself is not an issue when working with a provider like us, operational shifts are challenging because they change how the value chain works, requiring each stakeholder to adapt. As a tech provider, our goal is to make the integration and operation processes as seamless as possible, ensuring everything runs smoothly. However, for this to happen, everyone in the value chain needs to understand the benefits of tokenized assets, as well as the risks of not adopting them, so they don’t become blockers to adoption.

All it takes is market education. In my role, I will continue to create educational and engaging content that people are genuinely interested in reading, watching, and sharing. By spreading knowledge, we can guide professionals to thrive and understand why they should embrace onchain finance and how they can succeed in this transition. Together, we can drive the future of finance, transforming the way the world transfers and manages value.

More Stories:
- Tokeny’s Talent | Shurong (18 September 2024)
- Tokeny’s Talent | Omobola (25 July 2024)
- Tokeny’s Talent | Cristian (13 June 2024)
- Tokeny’s Talent | Adrian (15 May 2024)
- Tokeny’s Talent | Fedor (10 April 2024)
- Tokeny’s Talent | Fabio (16 February 2024)
- Tokeny’s Talent | Gonzalo (24 November 2023)
- Tokeny’s Talent | Denisa (26 October 2023)
- Tokeny’s Talent | Ali (29 September 2023)
- Tokeny’s Talent | Tiago (27 July 2023)

Join the Tokeny Solutions Family

We are looking for talents to join us; you can find the open positions by clicking the button.

Available Positions

The post Tokeny’s Talent | Shurong appeared first on Tokeny.


Dock

$CHEQ $DOCK Token Merger Approved: An Alliance for Decentralized Identity Adoption

We are thrilled to announce that the token merger between cheqd and Dock has been officially approved by both $CHEQ and $DOCK holders. 

By harnessing the combined strengths of two industry pioneers, Dock and cheqd will accelerate the global adoption of decentralized identity and verifiable credentials, empowering individuals and organizations worldwide with secure and trusted digital identities.

Dock and cheqd will continue as independent companies serving distinct market sectors in unique ways. cheqd will continue to advance payment infrastructure and network-layer functionalities, while Dock will remain focused on the issuance, verification, and monetization of verifiable credentials for Identity Solution Providers, including KYC, background check, and biometrics companies, through their Certs platform. Read more about the alliance here.

With the approval of this token merger, $DOCK tokens will be swapped for $CHEQ tokens at a ratio of 18.5178 $DOCK to 1 $CHEQ. This is based on a 15-day historical average using the closing prices of both tokens. The migration is estimated to commence in the latter half of Q4. More details will be available soon.
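
To make the ratio concrete: a holder of 10,000 $DOCK would receive roughly 540 $CHEQ, since 10,000 ÷ 18.5178 ≈ 540.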

Dock’s historical and future transactions will be migrated to the cheqd blockchain, guaranteeing continuity and providing enhanced functionality for all ongoing Dock operations.

Browse our FAQ to learn more about the alliance and token merger.

Majority Approval from cheqd and Dock Communities

The governance vote resulted in 100% approval from both $CHEQ holders and $DOCK holders.

This strong backing from both communities reflects the shared belief in the potential of this merger to unlock new opportunities for all parties involved and drive the future of decentralized identity.

What Does the Merger Mean for Dock and cheqd?

The two companies—cheqd and Dock—will remain independent legal entities, with projects and roadmaps remaining largely unchanged.

One of the most significant benefits of this collaboration is the increased interoperability it will provide. Dock will transition to a blockchain that is already being utilized by key players in the digital identity sector. By aligning ourselves with a widely adopted blockchain, we are positioning our solutions within a broader, interconnected ecosystem.

As a $DOCK token holder, this merger with $CHEQ brings a host of compelling benefits that enhance both the value and utility of your tokens, such as increased token liquidity, access to enhanced resources and tokenomics that benefit holders. Read all about the holder benefits.

Additionally, Dock’s migration of network traffic to cheqd will significantly boost activity on the cheqd network, bringing approximately 300% more traffic to the mainnet and 50% to the testnet. This will accelerate network effects, driving more adoption across industries and use cases.

This collaboration is set to increase demand for $CHEQ, as more identity transactions will occur across cheqd’s infrastructure, supporting a broader ecosystem of verifiable credentials and increasing token burn. The partnership of cheqd and Dock’s established ecosystems will forge a powerful network of over 100,000 community members and hundreds of active partners.

What Happens Next?

As we move forward, cheqd and Dock will announce the commencement dates for the following key activities:

- Token Migration: The migration of $DOCK tokens to $CHEQ is expected to begin in the latter half of Q4.
- Porting Blockchain Transactions: Existing blockchain transactions on the Dock chain will be ported to the cheqd blockchain.

The cheqd and Dock teams will work closely with exchanges to facilitate the token migration, ensuring a seamless transition for all trading activities.

Post-migration, Dock will default to using the cheqd network, though we will still support clients who request to use an alternative chain, multiple blockchains, or ledgerless identity systems. We believe defaulting to the cheqd chain will ensure that Dock continues to operate within the most advanced and secure decentralized ID ecosystem.


A Defining Moment for the Decentralized Identity Market

By merging the $DOCK token with $CHEQ, we are unlocking unprecedented opportunities for our community, positioning you at the cutting edge of decentralized identity innovation.

The future of decentralized digital identity is bright, and with your $CHEQ tokens, you'll be part of a dynamic, growing ecosystem that is set to lead the industry. 

Dock and cheqd will shape a world where secure, verifiable credentials are the norm, and your involvement is key to making this vision a reality. The journey ahead is filled with potential, and we are thrilled to have you with us as we pave the way for the next era of digital identity.


PingTalk

Best Buy Boosts Employee, Vendor, Contractor Efficiency, Experience

Best Buy enhances efficiency and security for employees, vendors, and contractors with Ping Identity's IAM solutions. Learn how in this detailed customer case study.

Walking into a Best Buy is a consumer electronics dream. Upon entering, you see the familiar and welcoming Best Buy “blue shirts” and know your electronic goals will be met. In the store, you will also see other shirts with logos of Best Buy partners like Apple, Microsoft, Samsung, and more, who collaborate with the company to help customers meet their varying technological needs. It’s truly a pretty awesome one-stop-shop experience, but did you ever stop to think about the complexity of the systems that allow these groups to work together in a shared space? For example, a Microsoft employee will probably not want to use an iPad, and an Apple employee should not be able to see Microsoft customer information and sales data. There are countless complexities with all of these vendors operating together in the same store. And all of these complexities are occurring in more than 1,100 locations globally.

Fortune 100 consumer electronics retailer Best Buy has not only nailed these very complex and numerous use cases, but it has done so with astounding efficiency. I recently had the pleasure of chatting with Greg Handrick, Director of Identity and Access Management (IAM) and Cryptography, and Vinodh Rajagopalan, Associate Engineering Director of IAM, and they explained how identity is driving efficiencies and secure yet pleasant user experiences for their employees, vendors and more.

 

Greg set the stage by explaining, “IAM is 100% centralized at Best Buy. Our team has global responsibility for all enterprise identities, which includes all employees, contractors, non-human accounts, bot accounts and vendors. We have a total of 180,000 identities under management.”

 

Best Buy began its journey with Ping in 2009, using PingFederate with a very niche use case. By 2020, Best Buy was experiencing issues with its IAM infrastructure, which consisted of Oracle Access Manager, Microsoft ADFS, SecureAuth and some homegrown solutions, all running on-premises. Vinodh explained, “Things were too complex. We didn’t have great support from our existing vendors, and we also began finding some bugs. But what was really important was our increasing need for flexibility and the ability to customize certain solutions.”


BlueSky

Bluesky’s Current Efforts on Trust and Safety


In August, we published a blog post on anti-toxicity features that Bluesky’s product team designed with the Trust & Safety team. You can read that blog post here.

Trust and Safety (T&S) encompasses how we make all aspects of the Bluesky app a safe and enjoyable experience for users, covering the processes, policies, and the product. As Bluesky’s Head of T&S, my goal is to understand where the biggest gaps in user needs are and how to address them to ensure that people have a pleasant experience on Bluesky.

This is a big quarter for Trust and Safety at Bluesky, as we work on a large number of improvements. Here’s a preview of everything that is in progress!

Ban evasion and multi-account detection capabilities

People deserve to have an experience free from harassment on Bluesky. While harassers can be infinitely creative in how they avoid detection, we’re working on tooling to reduce their impact. For example, we’re adding more friction to their ability to create new accounts. We currently register users for additional defenses when we see a pattern of new account harassment, but in the future, we'll be able to better detect and surface when multiple new malicious accounts are created and managed by the same user.
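To make that concrete, here is a minimal Python sketch of one way such detection could work: grouping new accounts that share a registration-time fingerprint so suspicious clusters can be surfaced for review. The fingerprint signal and the data are invented for illustration; Bluesky has not published its detection internals.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class NewAccount:
    handle: str
    signup_fingerprint: str  # hypothetical hash of registration-time signals

def cluster_new_accounts(accounts: list[NewAccount]) -> dict[str, list[str]]:
    """Group newly created accounts that share a registration fingerprint,
    so suspicious clusters can be surfaced to human reviewers."""
    clusters: dict[str, list[str]] = defaultdict(list)
    for account in accounts:
        clusters[account.signup_fingerprint].append(account.handle)
    # Singleton clusters are ordinary signups; only multi-account clusters matter.
    return {fp: handles for fp, handles in clusters.items() if len(handles) > 1}

accounts = [
    NewAccount("troll-1.example", "fp-a"),
    NewAccount("troll-2.example", "fp-a"),
    NewAccount("newuser.example", "fp-b"),
]
print(cluster_new_accounts(accounts))  # {'fp-a': ['troll-1.example', 'troll-2.example']}
```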

Toxicity detection experiments

Addressing toxicity is one of the biggest challenges on social media. On Bluesky, the two categories that made up 50% of user reports in the past quarter were rude content and accounts that are fake, scams, or spam. Rude content especially can drive people away from forming connections, posting, or engaging for fear of attacks and dogpiles.

In our first experiment, we are attempting to detect toxicity in replies, since user reports indicate that is where they experience the most harm. We’ll detect rude replies, surface them to moderators, and eventually reduce their visibility in the app. Repeated rude labels on content will lead to account-level labels and suspensions. This will be a building block for detecting group harassment and dog-piling of accounts.
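As a rough illustration of that escalation logic, here is a minimal Python sketch in which repeated rude-content labels roll up into an account-level label and, eventually, a suspension review. The thresholds are invented for the example; the real values are not public.

```python
from dataclasses import dataclass

# Invented thresholds for illustration only.
ACCOUNT_LABEL_AT = 3     # rude-content labels before an account-level label
SUSPENSION_REVIEW_AT = 6 # rude-content labels before a suspension review

@dataclass
class AccountRecord:
    did: str
    rude_labels: int = 0
    account_labeled: bool = False
    suspension_review: bool = False

def apply_rude_label(record: AccountRecord) -> None:
    """Record one rude-reply label and escalate once thresholds are crossed."""
    record.rude_labels += 1
    if record.rude_labels >= ACCOUNT_LABEL_AT:
        record.account_labeled = True
    if record.rude_labels >= SUSPENSION_REVIEW_AT:
        record.suspension_review = True  # a human moderator makes the final call

record = AccountRecord(did="did:plc:example")
for _ in range(4):
    apply_rude_label(record)
print(record.account_labeled, record.suspension_review)  # True False
```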

Automating spam and fake account removals

Harm on social media can happen quickly. For example, if a fake impersonation account asks for a fund transfer, it might take only a matter of minutes before someone falls for a scam. We’re launching a pilot project to automatically detect when an account is clearly fake, scamming, or spamming users to hopefully reduce the likelihood this happens. We’re hoping that this project, paired with our moderation team, can cut down the action time for these reports to within seconds of receiving a report.

Feedback on moderation reports

In the coming months, we’re moving away from communicating with users about violations via email and toward communicating through the Bluesky app. Users will receive notices of infractions or labels within the app, and we’ll send the outcomes of your own reports through the app as well.

Geography-specific labels

In some cases, content or accounts may be allowed under Bluesky's Community Guidelines but violate local laws in certain countries. To balance freedom of speech with legal compliance, we are introducing geography-specific labels. When we receive a valid legal request from a court or government to remove content, we may limit access to that content for users in that area. This allows Bluesky's moderation service to maintain flexibility in creating a space for free expression, while also ensuring legal compliance so that Bluesky may continue to operate as a service in those geographies. This feature will be introduced on a country-by-country basis, and we will aim to inform users about the source of legal requests whenever legally possible.
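Here is a minimal sketch of how a geography-scoped label could gate visibility. The "geo-hide:<country>" label format is our invention for the example, not Bluesky’s published schema.

```python
def visible_to(content_labels: set[str], viewer_region: str) -> bool:
    """Hide content only for viewers in the regions a geo-scoped label names."""
    for label in content_labels:
        if label.startswith("geo-hide:") and label.split(":", 1)[1] == viewer_region:
            return False
    return True

assert visible_to({"geo-hide:DE"}, "DE") is False  # hidden where the legal order applies
assert visible_to({"geo-hide:DE"}, "US") is True   # unaffected everywhere else
```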

Designing video on Bluesky for safety

We recently launched video on Bluesky, and the T&S team has been working with the product team to ensure the feature is launched safely.

Here’s a look at how T&S works together with product. The product team puts together a document listing what they intend to build. Trust and Safety then assesses the risks associated with the feature, and makes recommendations to minimize harms that are most likely from that feature. This ensures that we anticipate harms and integrate mitigations before launch.

For video, Trust & Safety has incorporated various features like being able to turn off auto-play or ensuring that reports can be made and labels applied to content. You can read more about the available safety tooling for video here.

We try to be pragmatic in building the safety elements that most people will need prior to launch, but there’s always room for more improvements in response to user feedback. So after a product launches, we pay close attention to reports and support requests as we improve the feature.

List changes to restrict abuse

Lists are a powerful way to have more control over your experience on Bluesky. You’re able to curate your favorite users, or to filter individuals out from your Bluesky experience — and to share those lists with others, so they can benefit from your curation as well.

However, sometimes bad actors use lists to harass others and violate our rules, so we’re making some changes. We recently updated starter packs to remove members when they are blocked, and we are doing the same for curated lists. Prior to this, the Bluesky Trust & Safety team could only take down entire lists as a moderation action rather than removing specific individuals; for moderation lists, that would have meant unintentionally erasing blocks. Now, when you block the creator of a list that you are on, you will be removed from the list. This behavior doesn’t apply to moderation lists, since that would defeat their purpose.
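Here is a minimal Python sketch of that block behavior, loosely modeled on the AT Protocol’s "curatelist" and "modlist" list purposes: blocking a list’s creator removes you from their curated lists, while moderation lists are left untouched.

```python
from dataclasses import dataclass, field

@dataclass
class UserList:
    creator: str
    purpose: str                       # "curatelist" or "modlist"
    members: set[str] = field(default_factory=set)

def on_block(blocker: str, blocked_creator: str, all_lists: list[UserList]) -> None:
    """When `blocker` blocks `blocked_creator`, remove `blocker` from that
    creator's curated lists; moderation lists are untouched so the blocks
    they encode are not silently erased."""
    for lst in all_lists:
        if lst.creator == blocked_creator and lst.purpose == "curatelist":
            lst.members.discard(blocker)

lists = [
    UserList("harasser.example", "curatelist", {"victim.example"}),
    UserList("harasser.example", "modlist", {"victim.example"}),
]
on_block("victim.example", "harasser.example", lists)
print([sorted(l.members) for l in lists])  # [[], ['victim.example']]
```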

We will also be starting a widespread effort to identify lists with toxic and abusive names or descriptions. Lists with names or descriptions that violate the Bluesky Community Guidelines will be hidden in the app until or unless their creator modifies them to comply with our rules. We will also take further action against users that repeatedly create abusive lists.

Lists continue to be an area of active discussion and development for our team to find the right balance for user safety.

Prioritizing User Concerns

This section provides some transparency on how we prioritize T&S efforts across the organization.

We read your concerns raised via reports, emails, or mentions to @safety.bsky.app. Our overall framework weighs how often something happens against how harmful it is. We then focus on addressing high-harm, high-frequency issues while also tracking edge cases that could result in serious harm to a few users.
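As a toy illustration of that frequency-versus-harm framework, consider the following sketch; the issue names, report counts, and weighting are all invented for the example.

```python
# (issue, reports per week, harm on a 1-5 scale) -- all numbers invented
issues = [
    ("rude replies", 5000, 2),
    ("scam impersonation", 300, 5),
    ("spam follows", 8000, 1),
]

def priority(reports_per_week: int, harm: int) -> int:
    """A toy frequency-times-harm score; the real weighting is not public."""
    return reports_per_week * harm

for name, freq, harm in sorted(issues, key=lambda i: priority(i[1], i[2]), reverse=True):
    print(f"{name}: priority {priority(freq, harm)}")
# rude replies: 10000, spam follows: 8000, scam impersonation: 1500
```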

For example, a small number of accounts have been harassing a few people on the app by creating multiple accounts and targeting them repeatedly. Although this happens to a tiny fraction of users, it causes enough continual harm that we want to take action to prevent this abuse.

As always, your feedback is welcome through comments or by reaching out to moderation@blueskyweb.xyz.

Tuesday, 17. September 2024

Safle Wallet

Safle Community Explorer Carnival: Your Epic Adventure Begins!


Ready to explore the future of Web3? The Safle Community Explorer Carnival is launching soon, bringing you an exciting series of challenges designed to unlock the full potential of Safle Wallet and Safle Lens. Each challenge takes you deeper into the Web3 universe, where you’ll explore new chains, discover groundbreaking dApps, and level up with valuable XP! 🌌

Compete to climb the leaderboard and earn from a massive rewards pool in Safle Tokens! Don’t miss your chance to be a top explorer and shape the future of Web3!

Here’s a sneak peek at the action-packed quests coming your way:

🚀 Ignite the Safle Hype: The Saflenaut Journey Begins!

Get your engines roaring because the carnival is just around the corner — and guess what? YOU are the spark to ignite the buzz! Ready to suit up and blast off into the Web3 cosmos?

Think you’ve got your GAME ON? Welcome to the Saflenaut Mission — where your Web3 universe takes off. The more you rally, the bigger the adventure!

💥 Rootstock Troop

Gear up for an explosive mission on the Rootstock chain! Navigate, explore, and interact with dApps in a whole new way as you unlock the power of Safle Wallet’s latest integration. Adventure awaits those brave enough to take the plunge.

🚀 The BEVM Rocket

Strap in for a rocket-fueled journey to the BEVM chain! This isn’t just any mission — it’s your chance to discover how Safle Wallet takes cross-chain functionality to the next level. Ready to fire up those engines?

🏔 Avalanche Explorer

Prepare to conquer the Avalanche! Scale new heights and unlock powerful rewards as you interact with dApps in Safle Wallet. Are you ready to make your mark in the Avalanche ecosystem?

🔮 Polygon zkEVM Pioneer

The future of Web3 scalability is here, and YOU can be one of the first to explore it! Enter the Polygon zkEVM frontier and uncover the cutting-edge technology Safle has seamlessly integrated. Your pioneering spirit is about to be rewarded!

🌠 Base Voyager

Ever wanted to be a true explorer of the Base chain? Now’s your chance! Mint NFTs, engage in games, and experience the magic of Web3 on an entirely new level — all from the comfort of your Safle Wallet.

👁️ The Safle Lens Explorer

Prepare to see your portfolio like never before with Safle Lens! Whether it’s detecting spam tokens or NFTs, interacting with our AI, or uncovering hidden gems, this quest will open your eyes to Safle’s most exciting features yet.

🏆 And There’s More!

Complete multiple quests, level up with multipliers, and claim your share of an airdrop worth 15k USD in USDT, Safle Tokens & RBTC! As you journey through the Carnival, the rewards will keep stacking up. The more you play, the bigger your prize!

This is no ordinary quest — it’s an epic adventure. Mark your calendars, gather your crew, and get ready to level up in the Safle universe. The Safle Community Explorer Carnival is about to go live… will you rise to the challenge?

Keep a lookout 👉🏻 Follow Safle

Join the community 👉🏻 Join Discord


auth0

Auth0 Forms Is Now Generally Available!

We're excited to announce the general availability of Auth0 Forms, a powerful visual editor that empowers you to create custom, dynamic forms that integrate seamlessly with your authentication flows.

Indicio

Choosing the right deployment for decentralized identity: Why Indicio offers SaaS as well as on-premise options


By Ken Ebert

As more decentralized identity and verifiable credential solutions get to market, many vendors only offer a Software-as-a-Service (SaaS) because of its ease of use and scalability. However, when it comes to managing verifiable credentials containing personal data, businesses, and especially governments, need to carefully assess where the platforms or software they depend on are hosted. In this blog, we’ll talk about how our platform for decentralized identity, Indicio Proven, supports requirements for data locality, compliance with regional regulations, and the security of personal data.

Assessment of data locality and regulatory compliance

Data residency is a key consideration when using a SaaS solution for verifiable credentials. A SaaS model for deployment may store or process data in multiple regions globally. While vendors often offer region-specific hosting, there are still challenges to ensuring that personal data is only processed in authorized geographic locations. This issue becomes even more pressing for government agencies and sectors dealing with sensitive citizen information, where the stakes for compliance are higher.

Governments around the world are beginning to operate under strict data sovereignty laws that dictate where personal data can be processed and stored. Regulations like the General Data Protection Regulation (GDPR) in the European Union, Australia’s Privacy Act, or Canada’s PIPEDA create stringent requirements for how personal data  is handled, especially when it comes to cross-border data flows.

For organizations in Europe, the eIDAS (Electronic Identification, Authentication and Trust Services) regulation is the framework shaping the future of digital identity. Compliance with eIDAS and other regional regulations requires careful attention to where and how sensitive data is processed and stored. 

For many organizations, the risks associated with using a SaaS model hosted in a foreign jurisdiction may outweigh the benefits, particularly if the service provider cannot guarantee that data will remain within the required geographical boundaries.

On-premise deployment: The case for control

For businesses and governments that require the strictest control over data processing, an on-premise deployment offers a secure alternative. This model allows organizations to manage verifiable credential platforms and solutions within their own environment, ensuring that sensitive personal data never leaves their infrastructure. In an on-premise deployment, verifiable credentials and the underlying issuance and verification infrastructure are fully managed, controlled, and protected by the organization, minimizing the risks of external breaches or compliance failures.

On-premise deployments are particularly appealing to financial services and healthcare, where stringent data protection regulations demand maximum control over personal data. 

Indicio’s Differentiator: Offering Both SaaS and On-Premises Solutions

Despite the clear advantages of on-premise deployment for critical data applications, few vendors offer on-premise deployment as an option. This is where Indicio stands out as a solution provider, with both SaaS and on-premises deployment options for businesses and governments to  meet their unique operational, privacy, and regulatory needs.

For those organizations that need the convenience and scalability of a cloud-based solution, Indicio Proven can be used as a fully-managed service. We handle the operational complexity of running the decentralized identity infrastructure, including regular maintenance, security updates, and compliance with global data protection regulations. This allows our clients to focus on their core operations while knowing that their verifiable credential solution is secure and up to date.

For organizations with stricter data-control requirements, Indicio Proven can be deployed on-premise to ensure that the  personal data in verifiable credentials is never processed or stored outside their control.

The benefits of Indicio’s flexible deployment approach

By offering both SaaS and on-premises deployment options, Indicio provides organizations with the flexibility to choose the model that works best for them. Here are the key benefits of working with Indicio:

1. Tailored to Your Needs: Whether your organization prioritizes the ease and scalability of SaaS or requires the security and control of on-premises, Indicio has a solution that fits. We understand that no two organizations are the same, and our dual deployment model ensures that you don’t have to compromise on security or convenience.

2. Operational Excellence: For our SaaS customers, Indicio takes on the full responsibility of managing the infrastructure for issuing and verifying credentials. We handle maintenance, upgrades, and security patches, ensuring that your system runs smoothly and securely at all times. Our superb customer service ensures that you receive the support you need when you need it.

3. On-premise control: For organizations that require more control, Indicio’s on-premises option allows them to manage their Indicio Proven instance  within their own environments. This deployment gives businesses and governments the ability to safeguard data, maintain compliance, and reduce risks associated with external data handling.

4. Regulatory compliance: Whether SaaS or on-premise, Indicio’s solutions are built with compliance in mind. We ensure that our systems meet the highest standards of security and data protection, giving you confidence that your decentralized identity solution will align with regulations like eIDAS, GDPR, and other regional frameworks.

Conclusion

As decentralized identity and verifiable credentials continue to shape the future of secure online interactions, businesses and governments must carefully evaluate their deployment options. SaaS models offer scalability and ease, but for organizations with stringent data control requirements, an on-premises deployment may be the best choice.

Indicio’s unique ability to provide both SaaS and on-premises solutions sets us apart in the market. Whether you need the operational simplicity of a managed SaaS environment or the control of an on-premises deployment, Indicio offers a flexible solution tailored to your needs, ensuring the security, compliance, and reliability of your decentralized identity infrastructure.

In an evolving regulatory landscape, Indicio is here to help you navigate the complexities of decentralized identity—offering superb customer service, operational excellence, and the flexibility to choose the deployment model that works best for you.

Contact us to learn more about how Indicio can support your verifiable credential deployment needs. 

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Choosing the right deployment for decentralized identity: Why Indicio offers SaaS as well as on-premise options appeared first on Indicio.


This week in identity

E57 - Back to School 2024 Episode


Summary

In this episode of the Week in Identity podcast, Simon and David discuss the latest trends and developments in identity security, including market activity, funding rounds, and significant acquisitions. They delve into the importance of NIST guidelines, the rise of non-human identity (NHI), and the implications of recent acquisitions by MasterCard and Salesforce. The conversation highlights the evolving landscape of identity management and the critical need for organizations to adapt to new challenges in cybersecurity.


Chapters

00:00 Introduction to the Week in Identity Podcast

03:52 NIST Guidelines and Identity Assurance

06:30 Aembit Funding Rounds and Non-Human Identity

13:42 Acquisitions in Identity: IndyKite and 3Edges

20:17 MasterCard and Recorded Future

26:39 Salesforce and Own Data







KuppingerCole

Building Resilient IAM Systems: The Limits of IGA Customization


by Martin Kuppinger

Customizing Identity Governance & Administration (IGA) within Identity & Access Management (IAM) is a common practice, but how much is too much? This question becomes more pertinent as organizations increasingly seek to adapt COTS (Commercial Off-The-Shelf) and IDaaS (Identity-as-a-Service) solutions to their specific needs. The tendency to “over-customize” remains prevalent, even as IDaaS solutions evolve. So, let us explore when customization makes sense and, more importantly, how to avoid the pitfalls that come with excessive modification.

Customization vs. Configuration: Let’s Clarify 

First, let’s clarify what we mean by “customization.” Customization involves writing new code—whether through traditional coding, low-code, or no-code platforms. Configuration, on the other hand, refers to adjusting settings within the system, ideally through the user interface or, if necessary, via configuration files. While low-code/no-code approaches have gained popularity, they don’t entirely mitigate the risks associated with customization, especially without proper documentation, version control, and staging environments in place. 

Why Customize IGA Solutions at All? 

The first and most important questions to ask are: Do we need customization in IGA solutions, and to what extent? These are two separate questions. Based on my experience, the amount of customization typically required is far less than many organizations assume. 

Most IAM processes, including the management of Joiner, Mover, Leaver (JML) activities, can be standardized. Yes, there are variations and organization-specific requirements, but these are often at the detail level: How many approvers are required? Should approvals be sequential or parallel? Even these specifics can often be addressed using best practices. Several vendors provide process frameworks, or you can consult experts for tailored frameworks that align with your organization’s needs. 
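To illustrate the point, here is a minimal sketch of how such approval details could live in configuration rather than custom code. The policy structure and field names are hypothetical, not any specific vendor’s format.

```python
# A hypothetical approval policy held in configuration rather than code.
APPROVAL_POLICY = {
    "access_request": {
        "approvers_required": 2,
        "mode": "sequential",   # affects routing order only, not the check below
        "approver_roles": ["line_manager", "application_owner"],
    },
}

def approvals_satisfied(request_type: str, granted_approvals: list[str]) -> bool:
    """Evaluate a request against configured policy; tightening the policy
    later means editing configuration, not writing new workflow code."""
    policy = APPROVAL_POLICY[request_type]
    return len(granted_approvals) >= policy["approvers_required"]

print(approvals_satisfied("access_request", ["line_manager"]))                        # False
print(approvals_satisfied("access_request", ["line_manager", "application_owner"]))  # True
```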

At the core, every organization needs to onboard employees, manage their access, handle job transitions, and de-provision access when necessary. These are universal requirements, and best practices can address them efficiently. Yet, many organizations still customize excessively, resulting in unnecessary complexity and cost. 

The Real Reasons for Customization 

There are several reasons organizations end up with highly customized IGA solutions: 

Legacy Processes: Many organizations are reluctant to let go of legacy processes, opting to map outdated workflows onto new systems. Worse, when organizations have multiple sites with their own “ways of doing things,” customization often spirals out of control.

Lack of Standard Frameworks: While process frameworks exist, not enough vendors offer them out-of-the-box, forcing organizations to build their own—often from scratch.

System Integrators: Cynics might argue that system integrators benefit from customization projects. However, this overlooks the downsides: dissatisfied customers, extended project timelines, and increased risk.

Does Switching Tools Solve the Problem?

Many organizations, when faced with a failing IAM (IGA) system, rush to replace the tool. While a tool change might seem like the solution, it rarely is. The problem usually lies in the approach to customization rather than in the tool itself. Even IDaaS, which inherently supports less customization, only mitigates the issue to a certain extent. 

A well-functioning IGA system doesn’t begin with the tool. It begins with clearly defined policies, processes, and organizational requirements. In projects that suffer from over-customization, the underlying issue is often the absence of well-documented processes. Without this groundwork, simply switching tools won’t help. 

Customization: When and How 

I’m not suggesting that customization is entirely unnecessary. There will always be specific needs that require customization. The key is to minimize unnecessary modifications and do it the right way when needed. 

Rethink Processes: Before diving into customization, take a step back and critically evaluate your processes. Do you really need that custom approval workflow, or is there a best practice you can adopt?

Avoid Backend Coding: A frequent source of trouble in IGA projects arises from coding directly against the backend, such as databases. If the database structure changes in a software update, the custom code breaks. Instead, work through APIs or create an abstraction layer to keep customizations stable.

Segregate Custom Code: Modern IGA solutions provide extensive API support and container-based deployments. Custom code should reside in microservices, consuming the APIs of your IGA system. This ensures that updates to the core system don’t break your custom code. Even if the API changes, the impact is isolated to the specific microservice, minimizing disruptions. (A minimal sketch of this pattern appears at the end of this article.)

Three Steps to Successful IAM (IGA) Customization

To ensure your IGA solution withstands necessary customization without failing, follow these steps: 

Define Policies and Processes First: Ensure your processes are thoroughly documented and follow best practices before even considering customization.

Minimize Unnecessary Customization: Many customizations provide little real benefit. Focus on what truly adds value to your organization.

Follow Best Practices in Coding: Build customizations on the Identity API layer of your Identity Fabric, isolate them in microservices, and ensure proper documentation and versioning.

By following these guidelines, you can deliver an IGA solution that meets your organization’s needs while avoiding the risks and costs of over-customization.
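As a minimal illustration of the “Segregate Custom Code” guidance above, the sketch below shows a microservice-style helper that talks to an IGA system only through its published REST API. The endpoint and payload are hypothetical placeholders, not any specific vendor’s API.

```python
import requests  # the endpoint below is a hypothetical placeholder

IGA_API = "https://iga.example.com/api/v1"

def fetch_identity(identity_id: str, token: str) -> dict:
    """Talk to the IGA system only through its published API. If the vendor
    changes its internal database schema, this code is unaffected; if the
    API itself changes, the blast radius is this one microservice."""
    response = requests.get(
        f"{IGA_API}/identities/{identity_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```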


Northern Block

Why Northern Block is Joining the Global Acceptance Network

Northern Block joins the Global Acceptance Network to solve governance challenges and build trust across digital ecosystems.

At Northern Block, we are thrilled to announce our participation as a founding member in the newly established Global Acceptance Network (GAN). This initiative is a crucial step towards solving one of the biggest challenges we face in the digital world: the lack of trust in digital interactions.

Think about how seamlessly payments work in the physical world. When you see a Visa logo at a merchant’s point of sale, you immediately know that your Visa card will be accepted. You don’t hesitate to tap your card on the terminal. Unfortunately, we don’t yet have the same level of confidence when it comes to online interactions.

Today’s digital interactions, especially those involving sensitive information like login credentials or payment details, are often fraught with spam, abuse, and fraud. We frequently find ourselves unsure if the transactions we’re engaging in are legitimate. Whether it’s receiving out-of-band communications through SMS or email from organisations claiming to need something urgent from us—often playing on our emotions to compromise our security—we face constant uncertainty.

On the other hand, organisations are striving to put their customers at the centre by creating more personalised and seamless experiences, and there’s no better way to achieve this than by obtaining data directly from the source: their customers. However, they need to trust that the data provided has integrity. Without this trust, businesses are forced to implement duplicate verification processes for all their customers, adding friction to the experience and undermining digital transformation efforts.

At Northern Block, we recognized this trust gap early on, which is why we became a founding member of the Trust over IP Foundation in 2020. Our goal wasn’t just to build better technologies but to apply the governance frameworks necessary to solve human trust problems in the digital world. While we’ve made great strides in achieving cryptographic trust—this only solves part of the problem.

Over the past few years, the Trust over IP Foundation has produced significant thought leadership and numerous deliverables, contributing greatly to the evolution of digital trust. Among these achievements, two major innovations stand out as particularly relevant to the Global Acceptance Network:

The Trust Registry Query Protocol: This allows any entity to interact with a trust registry by asking a simple question: “Does Entity X have Authorization Y, in the context of Ecosystem Governance Framework Z?”

The Governance Framework Metamodel and toolkit: These tools help capture and implement governance for ecosystems and have already been successfully deployed in initiatives such as Bhutan’s National Digital Identity Ecosystem and the Global Legal Entity Identifier Foundation (GLEIF).
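To make that query concrete, here is a minimal Python sketch of the question the Trust Registry Query Protocol standardizes. The endpoint shape and parameter names are illustrative assumptions, not the actual TRQP wire format.

```python
import requests  # endpoint shape and parameter names are illustrative only

def is_authorized(registry_url: str, entity: str, authorization: str, egf: str) -> bool:
    """Ask a trust registry: does `entity` hold `authorization` under the
    ecosystem governance framework `egf`?"""
    response = requests.get(
        f"{registry_url}/query",
        params={"entity": entity, "authorization": authorization, "egf": egf},
        timeout=10,
    )
    response.raise_for_status()
    return bool(response.json().get("authorized", False))
```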

The Global Acceptance Network builds on the progress made by the Trust over IP Foundation by putting its frameworks into action. While numerous ecosystems today leverage various forms of credentialing and could benefit from sharing data or credentials with others, the real challenge lies in establishing governance standards that ensure these exchanges are trustworthy. This is where GAN comes in.

Much like Visa connects banks, merchants, and consumers within a trusted payment network, GAN’s purpose is to connect digital ecosystems. However, unlike Visa, GAN is not a centralised network and cannot operate as one. Instead, its strength lies in developing relationships with ecosystems and making specific claims about these ecosystems—claims that GAN is uniquely positioned to verify. These claims won’t be about the internal governance or authorities within an ecosystem, but rather about the ecosystem itself and its conformance to GAN’s trust criteria. Over time, as ecosystems are recognised by GAN or linked to the GAN network, the hope is that people and organisations will view these ecosystems as trusted entities, similar to how we implicitly trust the Visa network when we see its logo.

GAN’s ultimate goal is to solve human trust and governance problems by reducing the risks involved in accepting digital credentials or data from outside an organisation’s own ecosystem. This vision is closely aligned with the one we had when the Trust over IP Foundation was formed: a future with thousands of interconnected ecosystems, each with their own governance frameworks. GAN will act as a connector, ensuring that these ecosystems can interact and exchange trusted data, enabling secure, frictionless interactions—just like when we confidently tap our Visa cards at the checkout.

At Northern Block, we provide digital trust solutions that enable ecosystems to produce and manage valuable credentials. As demand for these credentials grows across ecosystems—something the Global Acceptance Network (GAN) can facilitate—the value for our customers increases. Additionally, as a provider of trust registry solutions, which support data models linked to ecosystem authorities as well as registries of registries, we aim to ensure that these registries can establish relationships with the GAN trust registry. This further enhances the value and interoperability of the ecosystems we support, driving greater trust and value.

The post Why Northern Block is Joining the Global Acceptance Network appeared first on Northern Block | Self Sovereign Identity Solution Provider.



KuppingerCole

Offensive Security: Identifying Vulnerabilities Before Attackers Do


by Syed Ubaid Ali Jafri

As cyber threats become increasingly sophisticated, organizations must evolve their defense strategies to stay protected. Offensive security, which focuses on identifying and mitigating vulnerabilities before attackers can exploit them, is a crucial aspect of modern cybersecurity.
At cyberevolution 2024, Syed Jafri, Head of Cyber Defense & Offensive Security at Habib Bank Limited (HBL), will address these challenges. His expertise in offensive security practices and threat intelligence offers valuable insights for those looking to enhance their organization's defense mechanisms.


AI in Cybersecurity: Risks and Opportunities


by Alexei Balaganski

AI is often hailed as the ultimate tool for addressing cybersecurity challenges, but what happens when hype collides with reality? The meteoric rise of generative AI has captured the imagination of the public. From writing essays to producing art, AI can seemingly do anything. But can it really tackle the complex issues of cybersecurity effectively?

Let’s start with the elephant in the room: ChatGPT is not the pinnacle of artificial intelligence that many believe it to be. In fact, what we often mistake for the GenAI model’s competence is just its astonishing ability to instantly generate a response that sounds coherent and plausible, courtesy of billions of digital monkeys with typewriters.

Unfortunately, what these monkeys are still lacking is the honesty to admit that they don’t know something. Instead, they will happily generate pages of plausible-sounding nonsense (in the industry, this is politely referred to as “hallucinations”). To quote an article I read recently: “For decades, we were promised artificial intelligence. What we got instead is artificial mediocrity.”

Beyond the Hype: The Limits of Large Language Models in Cybersecurity

While ChatGPT may seem like an all-powerful assistant, it is not designed for or particularly good at many of the tasks necessary in cybersecurity. Large language models can write code, analyze texts, and even assist in decision-making, but their potential applications in a high-stakes field like cybersecurity must be approached with careful consideration.

Generative AI thrives on massive datasets. But in cybersecurity, those datasets often contain sensitive, confidential information that you would rather not share with an external model housed in a cloud data center. Add to that the huge computational overhead that these models require, and we are left with an unsustainable approach in the long term. Imagine the environmental costs: running LLMs with cutting-edge encryption, like fully homomorphic encryption, would take us closer to a climate catastrophe than Bitcoin mining ever did.

So, does this mean AI has no role in cybersecurity? Absolutely not. But we need to distinguish between what is hype and what is practical, scalable, and trustworthy.

Practical AI Use Cases in Cybersecurity: What Really Works

Long before ChatGPT was even a concept, machine learning (ML) techniques were already a staple in cybersecurity tools. From anomaly detection to behavioral analytics, AI-driven methods have long been applied to analyze large datasets and identify outliers that might signify a security breach.

The technology behind detecting anomalies, for instance, has been around for decades, well before the GenAI boom. It’s based on statistical methods that have been refined over the years. But here’s where things get tricky - detecting an anomaly is one thing, but determining whether that anomaly poses a real threat is quite another. With traditional methods, you may end up with a flood of anomalies, but with no real insight into which of them demand immediate action.

The most advanced AI/ML tools today do more than just identify anomalies. They correlate them with known attack vectors, connect them to a specific threat framework like MITRE ATT&CK®, and even provide detailed threat artifacts that can be used for further analysis. The real challenge is not detection but correlation: figuring out, for example, which vulnerabilities are actually exploitable in your specific environment. All of this makes for a robust threat detection mechanism, but none of it requires the power of generative AI.
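As an illustration of this kind of pre-GenAI machine learning, the sketch below uses scikit-learn’s IsolationForest to flag outlier events and then attaches human-readable context. The features, data, and ATT&CK mapping are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy per-event features: [bytes sent, failed logins, off-hours flag]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[500.0, 0.0, 0.0], scale=[100.0, 0.5, 0.2], size=(500, 3))
suspicious = np.array([[50_000.0, 12.0, 1.0]])   # exfiltration-like outlier
events = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)                    # -1 marks an anomaly

# Detection alone is not the hard part: attach context a human can act on.
# The ATT&CK reference here is hand-written for the example, not model output.
for event, flag in zip(events, flags):
    if flag == -1:
        print(f"anomaly {event}: review against e.g. MITRE ATT&CK T1048 (exfiltration)")
```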

Behavioral Analytics: The Long Game in Cybersecurity

Another area where AI/ML shines is in behavioral analytics - tracking user and system behavior over extended periods to identify potential security risks. But again, this is not the domain of ChatGPT. Traditional ML methods are more than capable of profiling behaviors, identifying deviations from the norm, and flagging potential threats based on those deviations.

The challenge in behavioral analytics is not the technology itself – it is the data. To be effective, behavioral AI tools need access to large, diverse datasets. This is why the most effective solutions come from vendors who operate massive security clouds, collecting behavioral data from a wide range of users, systems, and geographies.

What’s key to understand here is that this method requires continuous learning over time. Unlike the hype around instant results from LLMs, behavioral analytics relies on consistent, long-term data collection to provide meaningful insights.

Threat Intelligence: Where an LLM Can Truly Make a Difference

Knowing your enemy is a major factor in any kind of warfare, not just in cybersecurity. However, in cybersecurity, this struggle is especially unfair – thousands if not millions of malicious actors are out there against us, and somehow, we must collect enough intelligence about them to understand their methods, techniques, and motives.

Unsurprisingly, the Threat Intelligence industry is growing rapidly - both cybersecurity vendors and customers are in constant need of every bit of information that can give them an advantage in defending against the next cyberattack. Unfortunately, a lot of this information is highly unstructured and difficult to quantify. Entire teams of security researchers spend their days trawling the dark web for bits of intelligence about malicious actors.

Natural language processing capabilities of LLMs can dramatically increase their productivity. These AI models can directly interpret textual data like threat reports, social media, and forum posts to assess emerging risks, correlate them with data from different sources, and thus provide up-to-date insights into global cyber threats.
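A minimal sketch of that use case, assuming access to an OpenAI-compatible API: the model name, prompt, and report text are illustrative only, and any output would still need analyst verification.

```python
from openai import OpenAI  # any OpenAI-compatible endpoint; model name is illustrative

client = OpenAI()

REPORT = (
    "Actors registered lookalike domains, phished helpdesk staff, "
    "and then moved laterally using stolen VPN credentials."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Extract a JSON object with keys 'techniques' and 'targets' "
                   "from this threat report:\n" + REPORT,
    }],
)
print(response.choices[0].message.content)  # structured summary for an analyst to verify
```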

Can AI Handle Automated Incident Response?

One of the most controversial promises of AI in cybersecurity is the potential for automated incident response. In theory, AI could detect a threat and neutralize it without human intervention. In practice, though, there’s a significant trust gap. Many companies remain wary of handing over control of their incident response processes to an AI, no matter how advanced. A poorly designed AI could do more harm than good: imagine it shutting down critical manufacturing systems because it misinterpreted a benign anomaly as a serious threat.

However, we are seeing a shift in attitudes. The explosion of ChatGPT’s popularity has made organizations more open to the idea of AI taking on more responsibility in their security operations. But it’s a gradual process. Many companies are opting for a phased approach, first using AI in a “dry run” mode, where it identifies threats but does not take action. Only after extensive testing do they move to a more automated setup.

But even with this cautious approach, the question remains: should we trust AI to make these decisions for us? In most cases, the answer is still no; at least, not without significant oversight from human operators.

Finding the Balance Between Technology, Risk, and Trust

AI undoubtedly has a role to play in the future of cybersecurity, but we need to keep our expectations grounded in reality. Generative AI is not the silver bullet that many make it out to be - it’s useful in specific contexts, but far from a game-changer in cybersecurity. Instead, we should focus on leveraging the right kind of AI for the right tasks.

As with any emerging technology, trust is earned, not given. In cybersecurity, where the stakes are high, it’s crucial to proceed with caution, ensuring that AI is used to complement human expertise rather than replace it. After all, AI may help us detect threats faster, but it’s human judgment that ultimately keeps our systems safe.

If you’re interested in learning more about AI applications from real human experts, you might consider attending the upcoming cyberevolution conference that will take place this December in Frankfurt, Germany. AI risks and opportunities will be one of the key topics discussed there.


Ontology

Inland Revenue’s Data Breach and Why Web3 Security Needs Decentralized Identity


The recent Inland Revenue data breach serves as a stark reminder of the fragility of centralized systems. When large organizations — whether they be governments, corporations, or tech giants — are responsible for housing vast amounts of sensitive data, a single error can have catastrophic consequences. In this case, it’s tax information. But the implications go much deeper.

We’ve seen time and again how centralized structures, a hallmark of Web2, fail to protect data adequately. Whether through technical vulnerabilities or human error, the result is the same — your personal information is left exposed. This isn’t just about tax records, passwords, or email addresses getting into the wrong hands. It’s about trust. And when that trust is broken, it takes years to rebuild, and we’ve all become painfully aware of how fragile that trust is in today’s digital age.

This is where decentralized identity (DID) comes in. DID flips the script, handing control back to individuals rather than institutions that often mismanage data. With decentralized identity systems, your personal information is no longer stored in a vulnerable central server; it’s distributed across a secure, immutable blockchain. You decide who gets access to your data and under what terms. You own it, you control it, and you can revoke access whenever you want.
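For readers unfamiliar with the mechanics, here is a minimal sketch of a W3C-style DID document, using the did:example method; the identifiers and key material are placeholders.

```python
import json

# A minimal W3C-style DID document using the did:example method.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "z6Mk...",  # public key placeholder (elided)
    }],
    "authentication": ["did:example:123456789abcdefghi#key-1"],
}

# Only this public document is published for others to resolve; the matching
# private key -- and therefore control over the identifier -- stays with you.
print(json.dumps(did_document, indent=2))
```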

Web3 security technologies like Zero Knowledge Proofs, Self-Sovereign Identity, and decentralized storage solutions enable this shift. Instead of depending on a tax department or a tech giant to safeguard your data, you control every aspect of its distribution. Inland Revenue’s mishap should be a wake-up call, a signal that centralized systems are not built for the digital age we now inhabit. The centralized Web2 world is riddled with single points of failure, and as we become more reliant on digital systems, these failures become not just likely but inevitable.

In contrast, decentralized systems are trustless by design. You don’t need to trust an organization or a government to protect your data because the system itself is built on cryptographic proofs that ensure privacy and security. It’s about data sovereignty — taking back control over the very information that defines us.

Inland Revenue’s slip-up highlights a deeper truth: centralized data management is outdated and dangerous. The promise of Web3 is a system where users are empowered, not at the mercy of flawed institutions. This isn’t just an evolution in technology; it’s a fundamental shift in how we interact with and protect our personal information. The time has come to embrace decentralized systems, where security, privacy, and control are no longer luxuries but basic rights.

Are we ready to leave behind the vulnerabilities of Web2? The Inland Revenue incident suggests we don’t have much of a choice.

Interested in learning more about decentralized identities? Explore Ontology’s decentralized identity solutions and see how we’re building the future of trust.

Inland Revenue’s Data Breach and Why Web3 Security Needs Decentralized Identity was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Nov 12, 2024: Building Application Resilience Amidst Regulatory Shifts

In today’s fast-changing regulatory landscape, businesses must not only meet compliance standards but also ensure their applications are resilient against cyber threats. As regulations tighten and the risk environment evolves, organizations face growing pressure to safeguard their applications while staying compliant. The need to balance security with legal requirements has never been more critical for IT professionals.

Monday, 16. September 2024

1Kosmos BlockID

Streamlining Self-Service User Onboarding with 1Kosmos MFA Integration


In today’s fast-paced digital world, efficient and secure user onboarding is a crucial aspect of any organization’s IT strategy. Imagine users maintaining countless account credentials to log in to their office productivity tools suite. Sounds cumbersome, right? 1Kosmos, as a multi-factor authentication partner, is here to solve that very issue.

Microsoft, a leader in productivity and cloud solutions, has partnered with 1Kosmos to enhance user onboarding through the integration of 1Kosmos. This collaboration offers a streamlined and secure self-service onboarding process, addressing key concerns of identity verification and user experience. 

Self-service onboarding enables users to onboard and enroll their identity independently, reducing the burden on IT and HR teams and improving operational efficiency.  

Traditional onboarding processes often involve multiple steps and require substantial input from users, often leading to significant administrative overhead and potential delays. This conventional approach typically includes a series of manual tasks such as creating user accounts, assigning permissions, configuring access to various systems, and ensuring compliance with security protocols. Each of these steps demands careful attention and coordination from IT personnel to ensure that new hires are properly integrated into the company’s IT infrastructure. This can be time-consuming and prone to human error, especially in large organizations with complex IT environments. 

In contrast, Microsoft’s self-service onboarding solution streamlines this process by allowing new employees to handle many of these tasks independently through a user-friendly interface. This modern approach not only reduces the workload of IT staff but also accelerates the onboarding timeline, enabling new hires to get up and running more quickly. By automating routine tasks and providing a seamless, guided experience for users, self-service onboarding enhances operational efficiency and ensures a more consistent and error-free setup process. 

How 1Kosmos Enhances Self-Service Onboarding: 

1Kosmos provides a robust identity verification solution that enhances the self-service onboarding process. The key benefits of integrating 1Kosmos in Microsoft’s self-service onboarding include: 

Improved Security: 1Kosmos offers sophisticated identity verification through biometric and blockchain-based technology. This ensures that the users being onboarded are legitimate and significantly reduces the risk of identity fraud and unauthorized access.

Enhanced User Experience: Traditional MFA methods can be cumbersome, requiring users to remember and manage multiple credentials. 1Kosmos simplifies this by using biometric data and blockchain technology, which are not only more secure but also more convenient for users.

Streamlined Processes: With 1Kosmos integrated into Microsoft’s onboarding framework, the process becomes more intuitive. Users can complete the verification process quickly using their biometric data, which reduces friction and accelerates the overall onboarding timeline.

Reduced IT Workload: By automating and securing the identity verification process, 1Kosmos reduces the need for IT intervention in the onboarding process. This allows IT teams to focus on more strategic tasks rather than managing routine account setup and security issues.

1Kosmos integrates seamlessly with Microsoft’s suite of productivity tools and cloud solutions. This integration ensures that users accessing Microsoft applications, such as Office 365 or Azure, are securely verified through advanced authentication methods. As a result, organizations can maintain a high level of security while offering a user-friendly experience.

The partnership between Microsoft and 1Kosmos represents a significant advancement in self-service user onboarding. By incorporating 1Kosmos as an MFA factor, Microsoft enhances the security and efficiency of the onboarding process, benefiting both users and IT teams. As organizations continue to prioritize digital transformation, adopting such innovative solutions will be essential for maintaining a secure and efficient IT environment. 

Integrating 1Kosmos with Microsoft’s self-service onboarding process not only enhances security but also improves the overall user experience, setting a new standard for efficient and secure account management. 

The post Streamlining Self-Service User Onboarding with 1Kosmos MFA Integration appeared first on 1Kosmos.


Trinsic Podcast: Future of ID

Calvin Fabre - Envoc's Role in Pioneering Mobile Driver’s Licenses in Louisiana


In this episode, I’m joined by Calvin Fabre, President and Founder of Envoc, a company that has been at the heart of mobile driver's license (mDL) innovation in Louisiana, a state leading the nation in mDL adoption. Calvin shares the fascinating story of how his company helped bring the country’s first digital driver’s license into reality, starting with a simple idea for a “digital glove box.”

We dive into a variety of topics, including:

- The journey from bidding on payment processing systems to developing a groundbreaking mDL system for the Louisiana DMV
- How Envoc navigated the complexities of legislation and law enforcement adoption to make digital driver's licenses legal for routine traffic stops
- The importance of user feedback in expanding the LA Wallet app to include hunting licenses, concealed carry permits, and even COVID-19 vaccine cards
- The unique role LA Wallet has played in verifying identity remotely, including for disaster relief and online age verification for adult content
- Insights on the future of digital credentials, from frictionless onboarding to the growing adoption of mDLs in industries like banking and retail

Calvin’s expertise offers a deep dive into the future of identity and digital credentials, making this episode a must-listen for anyone interested in the intersection of technology, law enforcement, and secure digital identification.

You can learn more about Envoc at envoc.com.

Subscribe to our weekly newsletter for more announcements related to the future of identity at trinsic.id/podcast

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.


Caribou Digital

Breaking down power imbalances through co-creation


Written by Chelsea Horváth, Measurement & Impact Manager, and Grace Natabaalo, Research & Insights Manager, both at Caribou Digital.

Co-creation has become an increasingly important topic and practice within the research, evaluation, and development communities.

Like many others in our community of practice, at Caribou Digital, we’re reflecting on co-creation in our work. At first glance, co-creation seems simple enough — create something with others.

But when the rubber hits the road, sticky questions arise. Who needs to be involved? What information is shared and how? How much time and resources are required to co-create? How is consensus reached? Who makes the final decision? Through trial and error and learning from others in the field, we’d like to share our experience and lessons on co-creation within research.

Caribou Digital’s approach to co-creation

At Caribou Digital, we understand co-creation to be an “approach that brings people together to collectively produce a mutually valued outcome and that involves a participatory process assuming some degree of shared power and decision-making.”

At conferences and in requests for proposals, we often see that co-creation is confused with collaboration (see the table below created by the authors).

The key differences between the two can be found in the definition above: breaking down power structures and decision-making. Without time and resources dedicated to those aspects, attempts at co-creation become more like collaboration.

A table outlining the differences between consultation, collaboration, and co-creation.

Using co-creation to center young people as experts in their own digital futures

In partnership with the Mastercard Foundation, Caribou Digital researched young people’s experiences with digital technologies in Africa, selecting 20 young people from across seven countries to co-create with. They included young people whose stories are not often seen or heard, such as women, people living with disabilities, refugees, and those living in rural areas.

The research team recognized that, despite good intentions, power imbalances would exist among the young people, the Mastercard Foundation, and Caribou Digital. These would hinder important insights that could lead to more strategic and relevant recommendations.

From the outset, we created an environment to alleviate these power imbalances. The co-creation process involved treating the young people as experts whose stories shaped the report, emphasizing collaboration and flexibility. This approach was outlined in the Terms of Reference, which each young person signed at the beginning of the project. At the first video conferencing session, expectations were aligned and rules of engagement were set. The young people reviewed and provided feedback on the research coding framework, shaping the language and direction of the project. Video conferencing sessions to share experiences were made inclusive and accessible, with flexible post-session reflection assignments to accommodate all needs. During the report-writing phase, panelists reviewed drafts, edited their quotes, and provided feedback, culminating in a discussion on how best to present the final report.

In reflecting on our co-creation process, three core learnings emerged.

Lesson #1: Storytelling and reflection assignments yield richer data in a non-extractive way.

Rather than extract young people’s experiences through various data collection methods, we used storytelling and reflection assignments to co-create this research. From the beginning, Caribou Digital emphasized that the young people were the experts. Their stories were the foundation of the report; our role was to facilitate and listen. The online video conference format allowed the young people to build on one another’s experiences, feel validated, and connect in a non-extractive process. Post-session reflection assignments (for example, asking the young people to reflect on how digital technologies have impacted their choice and agency) allowed them to reflect on their own and in a convenient mode (audio message or email). Providing feedback on the research process, one young person shared, “The room was always accommodating of all of us who wanted to speak, and the moderators were tolerant of our views. I felt [at] home to speak/write from the reality of my experience.”

Lesson #2: Double the time and resources needed for co-creation.

Co-creation required more time, planning, and resources than initially thought. Every video conference session required thoughtful preparation to ensure a welcoming and inclusive environment — from the slide deck to the video captions. Reflection assignments and video recordings were analyzed carefully to ensure they accurately represented the young people’s experiences. Extra time was needed for the young people to review report drafts, edit quotes, and expand on their experiences. A safe estimate for others looking to use this co-creation approach would be to double the time and human resources needed.

Lesson #3: Accountability, transparency, and flexibility are key co-creation ingredients.

It was important for Caribou Digital to develop a trusted working relationship with the young people to keep them engaged throughout the research process. We were accountable when things weren’t working well and shared how the young people’s feedback was incorporated into the report. We were transparent with expectations for the research and when honorarium payments were delayed. We were flexible when the young people couldn’t provide feedback on time or attend a video conference session due to busy schedules. These practices kept the young people engaged throughout the research process. When asked to provide anonymous feedback on the research process, one participant shared, “[Caribou] was always in touch both in the Zoom session and WhatsApp to guide in case anything wasn’t right. […] We also had timely reminders for the meetings, and at no point was I caught offside or unaware of a meeting.”

Catalyzing research with co-creation

When done well, co-creation is an incredibly powerful practice that can elevate and amplify marginalized voices and improve the quality of research products. Our co-creation journey with these 20 young people was enriching and insightful, underscoring the value of trust and transparency.

By prioritizing youth voices and experiences, the 20 young people, Caribou Digital, and the Mastercard Foundation crafted a powerful report that reflects young people’s perspectives and experiences on digital technologies in Africa. One young person shared, “I feel like [co-creation] is a good approach because it lends to the authenticity of the report since these are our lived experiences […] It also makes the report relatable to fellow youth especially.”

Caribou Digital is committed to continuing this approach and conducting more co-created research. If you’re interested in participating in such initiatives or have ideas for collaboration, we invite you to connect with us at chelsea@cariboudigital.net.

Breaking down power imbalances through co-creation was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


HYPR

What Is Phishing-Resistant MFA and How Does it Work?

Phishing, despite its somewhat innocuous name, remains one of the foremost security threats facing businesses today. Improved awareness by the public and controls such as multi-factor authentication (MFA) have failed to stem the tide.

The FBI Internet Crime Report puts phishing and its variants (spear phishing, smishing, vishing) as the top cybercrime for the last five years, and the advent of generative AI has only added fuel to the fire. Using ChatGPT and other tools, hackers can quickly create personalized messages, in local languages, to launch widespread, highly effective phishing campaigns.

In the last six months alone, malicious emails have increased by 341%, prompting industry experts to urge organizations of all sizes to implement phishing-resistant MFA.

So, what is phishing-resistant MFA and how does it differ from traditional MFA? In this article, find phishing-resistant definitions and use cases, and learn why it’s the safest option for organizations.

What is Phishing?

Phishing is a method of attack used by malicious actors that involves deceiving users into installing malware or revealing sensitive information such as passwords, payment card and social security numbers. With this information they can take over accounts, sell the information on the dark web, steal identities and even access internal systems and networks of an organization. 

Common phishing attacks include:

- Email phishing: Attackers send emails, typically with malicious links or attachments that steal sensitive data from users.
- Whale and spear phishing: Similar to email phishing, whale and spear phishing are more targeted and aimed at specific, typically high-profile people in the organization (e.g. CEO or other executive).
- Smishing and vishing (voice phishing): Smishing uses SMS messages while vishing uses either a mobile or landline, combining it with social engineering attacks.
- Domain phishing/impersonation: Attackers typically pretend to be well-established brands to gain users’ trust and get them to divulge sensitive information.
- Malicious attachments: Attachments contain malware that infects systems and can trigger ransomware or other attacks that steal sensitive data.

What is Multi-Factor Authentication?

Multi-factor authentication requires at least two independent factors: knowledge, or something you know (e.g., password, PIN, security question); possession, or something you have (e.g., OTP code, device); and inherence, or something you are (e.g., fingerprint or other biometric marker).

It differs from two-factor authentication (2FA) in that 2FA requires an additional verification beyond your username and password, but does not require that verification to come from a different authentication category, as MFA does.

Phishing-Resistant MFA Overview

Phishing-resistant authentication does not use shared secrets at any point in the login process, eliminating the attacker's ability to intercept and replay access credentials and hardening the authentication process so that it cannot be compromised by even the most sophisticated phishing attacks. Passwordless MFA based on FIDO standards is considered the gold standard for phishing-resistant authentication by the OMB and other bodies.

Phishing-resistant MFA is based on public/private key cryptography and follows the guidelines published by the OMB in its M-22-09 Federal Zero Trust Strategy memorandum and the requirements for “verifier impersonation resistance” outlined by the National Institute of Standards and Technology (NIST) in SP 800-63-3.  
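
To make the "no shared secrets" point concrete, below is a minimal TypeScript sketch of the asymmetric challenge-response pattern that FIDO-style authentication builds on, using Node's built-in crypto module. It illustrates the principle only, not any vendor's implementation, and all names are illustrative.

```typescript
import { generateKeyPairSync, randomBytes, sign, verify } from "node:crypto";

// Enrollment: the authenticator generates a key pair on the device.
// The private key never leaves the device; the server stores only the public key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Login: the server sends a fresh, random challenge...
const challenge = randomBytes(32);

// ...the device signs it locally (typically gated by a biometric or PIN)...
const signature = sign(null, challenge, privateKey);

// ...and the server verifies the signature against the registered public key.
// No password, OTP, or other shared secret ever crosses the wire, so a
// phishing page has nothing to capture and replay.
const authenticated = verify(null, challenge, publicKey, signature);
console.log(authenticated ? "login accepted" : "login rejected");
```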

The Problem With Traditional MFA

There are two different problems when it comes to traditional MFA. The first is that it causes friction, both for employees who use it to access accounts and consumers who want to make their purchases quickly. 

The second problem is a security issue. Unfortunately, the most common second factor in traditional MFA is “something you have” in the form of an SMS or OTP. Like passwords, these verification methods are highly vulnerable to phishing as well as MitM (Man-in-the-Middle) attacks. In order for MFA to resist phishing, it cannot rely on the use of SMS, OTPs, or identification attempts through voice calls or interceptable push notifications.

Why Phishing-Resistant MFA is the Gold Standard

A better solution is FIDO or PKI-based passwordless authentication. These phishing-resistant MFA methods remove the vulnerabilities that undermine traditional MFA, including any use of a “something you know” factor, as these are the target of the majority of phishing attacks.

Phishing-resistant MFA does not use any of these weaker authentication factors. It uses a strong possession factor in the form of a private cryptographic key (embedded at the hardware level in a user-owned device) and strong user inherence factors such as touch or facial recognition. Equally important, the backend authentication process does not require or store a shared secret.

Since 2022, CISA, the Cybersecurity and Infrastructure Security Agency, has strongly recommended that all organizations implement phishing-resistant MFA based on FIDO standards. This is considered the gold standard for phishing-resistant authentication by NIST (800-63B), the FFIEC, the OMB and other cybersecurity authorities.

Phishing-resistant MFA flow

Breaking Down Phishing-Resistant Multi-Factor Authentication

Phishing-resistant multi-factor authentication defends against attackers who are looking to bypass authentication controls. This more advanced level of security involves various technologies and processes, which can be implemented in a number of ways.

Strong Authentication

A hallmark of phishing-resistant MFA is strong authentication that provides a robust defense against phishing and other targeted attacks. A somewhat broad concept, it involves using secure cryptographic protocols and two or more authenticating factors that include proof of device possession as well as user biometrics.

Passkeys

Passkeys replace passwords and secrets with cryptographic key pairs and on-device biometrics for faster, easier, and more secure sign-ins to websites and apps. Unlike passwords, passkeys are always strong and phishing-resistant. Passkeys can be either synced or device-bound. Synced passkeys are the standard passkeys offered by Apple, Microsoft, Google and others.

The private key is securely stored in a vault, such as the OS keychain or a password manager, and can be synced between devices. Device-bound passkeys, by contrast, are stored on a specific hardware device and cannot be shared with other devices.
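
For illustration, here is a browser-side TypeScript sketch of passkey registration and use with the standard WebAuthn API (`navigator.credentials`). The domain, user details, and challenges are placeholders; in practice the challenge is issued by the server and the returned credential or assertion is sent back to it for verification.

```typescript
// Registration: create a passkey bound to this site (a sketch; values are placeholders).
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { name: "Example Corp", id: "example.com" },       // binds the key to this domain
    user: {
      id: new TextEncoder().encode("user-1234"),           // opaque user handle
      name: "alice@example.com",
      displayName: "Alice",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // -7 = ES256
    authenticatorSelection: { residentKey: "required", userVerification: "required" },
  },
});

// Authentication: the browser will only offer this credential on example.com,
// so a look-alike phishing domain can never trigger it.
const assertion = await navigator.credentials.get({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rpId: "example.com",
  },
});
```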

Security Keys 

Security keys are devices that store cryptographic keys and can be either hardware or software-based. Software-based keys might be stored on and integrated into mobile devices, for example, whereas hardware keys are dedicated physical devices. However, hardware keys have limitations, as they can easily be lost or stolen and are challenging to recover.

Biometric Authentication

Biometric authentication focuses on biological methods of identification such as fingerprints or face recognition to verify identity for the inherence (e.g. “something you are”) authentication factor. It is often integrated into devices such as mobile phones or computers. 

Adaptive Authentication 

While not technically an element of phishing-resistant MFA, adaptive authentication enforces verification of identity based on the user’s context and risk. For example, it would have a different process based on the user’s location (e.g. home or work) and device (e.g. phone or work computer). 

The Cost of Phishing Attacks

Phishing plays a role in various types of attacks. According to the 2023 Verizon Data Breach Investigations Report, phishing accounted for 44% of social engineering breaches, with a median of $50,000 stolen in Business Email Compromise incidents. It’s also a key initial attack vector in credential stealing, allowing hackers to initiate fraudulent transactions, deliver malware including infostealers and ransomware, and gain an authenticated foothold from which they can move laterally within the system.

The Cost of a Data Breach 2024 report by IBM estimates that the average cost of a data breach is $4.88 million, an increase of 10% from the year before. Unfortunately, the go-to mitigation to prevent phishing, namely adding traditional MFA, has proven inadequate. Sometimes traditional MFA factors are even used as part of the attack itself.

Most multi-factor authentication solutions feature a password as one of the verification factors. The additional authentication factor generally is a one-time password (OTP) sent by voice, SMS, or email, or a push notification via an authenticator app that the user must accept.

Today, automated phishing kits that can circumvent these methods are readily available to hackers. Cybersecurity experts claim that over 90% of all multi-factor authentication is phishable. Due to these MFA vulnerabilities and the threat posed by phishing, the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Government Office of Management and Budget (OMB), as mentioned above, have specifically called for phishing-resistant MFA. 

Why Organizations Need to Prioritize Phish-Resistant Authentication

While the need for phishing-resistant MFA has been apparent for some time, and was a key driver for establishing the FIDO Alliance, the generative AI trend, and ChatGPT in particular, has kicked this into overdrive. Cybercriminals now have the ability to send massive numbers of highly targeted phishing attacks using dark web ChatGPT counterparts such as FraudGPT and WormGPT.

According to Slashnext’s State of Phishing 2024 Mid-Year Assessment, there has been a 4151% increase in malicious emails since the advent of ChatGPT in late 2022. 

As phishing attacks have increased, so has the incidence of account takeover (ATO),  leading to a number of potential consequences for targeted organizations, including supply chain fraud, data theft and the installation of ransomware and other malware. Attackers can also use the hijacked account of one user to escalate attacks within the organization by sending malicious emails from a trusted user.

Multi-factor authentication has proven ineffective against modern phishing campaigns, which are able to phish both the initial login credentials and the second factor. For example, a phishing message might direct the victim to a proxy website while the attacker acts as a man-in-the-middle to steal both the password and OTP code.

This is only one of many tactics cybercriminals use to compromise multi-factor authentication that uses OTPs or SMS. Others include running legitimate versions of websites on their own servers, using robocalls to convince users to hand over codes and SIM-swapping, so messages are sent to an attacker’s phone.

The skyrocketing number of phishing attacks in general, accompanied by sophisticated tactics that can circumvent common authentication checks, means that phishing-resistant MFA is no longer optional. Instead, it is the only choice to keep employees and organizations safe from the vast majority of phishing threats.

How to Choose a Phishing-Resistant MFA Solution

When considering a phishing-resistant MFA solution, you’ll want to ask about its ability to completely remove shared secrets (passwords, OTPs), its support for multiple devices (e.g. desktop and mobile), and its ability to reduce friction for the user experience.

For example, does it secure authentication for remote workers and work offline? Is it intuitive and easy for new users to learn? You’ll also want to verify how long it takes to deploy across your organization and if it integrates with major identity providers (IdPs). Finally, you’ll want to make sure it’s FIDO Certified and achieves compliance with Zero Trust architecture and regulatory obligations.

Considerations When Implementing Multi-Factor Authentication

Implementing multi-factor authentication within your organization involves a few different factors to evaluate:

- Security strength: Although MFA typically protects against brute force attacks, some types of authentication are subject to phishing attacks. To ensure the highest level of security, you’ll want to consider phishing-resistant MFA that is FIDO-compliant.
- Cost: You’ll need to evaluate the costs of the solution, which include not only setup and user training but ongoing maintenance costs. Keep in mind that while some solutions might cost more, they may also deliver better security and be easier for your team to implement. Some solutions may also impact productivity at the time of deployment, so that might be a consideration.
- Flexibility: Users want a number of different options available for MFA. Check that your solution offers different methods of authentication, such as verification via a mobile application or hardware keys, to adjust to the needs of different users and environments.
- Scalability: Can the solution adapt to the changing needs of your organization? Can it handle a remote workforce? Does it offer MFA for networks, servers, and cloud infrastructure?

Learn how to evaluate passwordless security solutions

HYPR's Phishing-Resistant MFA Solution

It’s clear that phishing-resistant MFA is critical, but what does it look like in practice? HYPR’s Passwordless MFA solution is based on the FIDO standards and provides phishing-resistant authentication from desktop through to cloud applications, no matter where your workforce is located.

HYPR leverages public key cryptography to allow for secure authentication that fully eliminates the use of shared secrets between parties. Just as importantly, the HYPR platform is easy to deploy and makes logins fast and easy for the user. Complicated sign-in processes are one of the biggest reasons that people take shortcuts or use unsafe practices that criminals exploit. 

To learn more about passwordless security and phishing-resistant MFA, read our Passwordless 101 guide.

FAQs

What is the difference between passwordless and phishing resistant MFA?
Not all passwordless MFA is phishing-resistant or indeed really passwordless. OTP codes, after all, are a form of password. A solution that uses any kind of shared secret can still be compromised by phishing, man-in-the-middle and other attacks that target credentials. Phishing-resistant MFA, on the other hand, ensures that even if users are targeted with phishing attacks, there are no credentials available to steal and their authentication remains secure.

What are the benefits of phishing resistant MFA?
Phishing-resistant MFA delivers a number of benefits to the user. First, it delivers a friendly user experience that eliminates the friction involved in the traditional MFA process. Second, it provides a higher level of security than two-factor authentication or traditional multi-factor authentication. 

Can phishing bypass 2FA?
Yes, phishing can bypass 2FA using a number of different methods such as man-in-the-middle attacks, password resets and social engineering attacks. This is because most 2FA verification methods involve one-time passwords (OTP) via email or SMS, which can be easily intercepted.

Why are passkeys phishing resistant?
Passkeys are phishing resistant as they are based on FIDO standards which were designed to resist phishing as well as some other forms of attack. They consist of cryptographic key pairs, which are registered to a specific authenticating service, ensuring that the passkey only works with the exact domain name of the service. There are no passwords or shared credentials to phish and a spoofed site cannot use them.

Editor's Note: This blog was originally published May 2022 and has been completely revamped and updated for accuracy and comprehensiveness.


KuppingerCole

Evidian Orbion IDaaS solution

by Martin Kuppinger

This KuppingerCole Executive View report examines Evidian Orbion, the next-generation IDaaS solution from Evidian. Orbion provides a comprehensive, integrated approach to Identity as a Service (IDaaS), addressing all major areas of Identity and Access Management (IAM) beyond just the workforce. This report includes a technical review of the solution Evidian Orbion.

Microsoft Entra ID Governance

by Martin Kuppinger

This KuppingerCole Executive View report looks at Microsoft Entra ID Governance, the IGA (Identity Governance & Administration) solution within the Microsoft Entra portfolio. Microsoft Entra ID Governance is delivered as IDaaS (Identity as a Service). It allows simple and fast deployment of IGA capabilities with a good set of capabilities serving the requirements of a wide range of customer use cases.

Sunday, 15. September 2024

KuppingerCole

Beyond ChatGPT: AI Use Cases for Cybersecurity

How can artificial intelligence be used in cybersecurity? Matthias and Alexei asked ChatGPT exactly this question and it came up with quite a list of use cases. They go through this list and discuss it. They explore the different forms of AI aside from generative AI, such as non-generative AI and traditional machine learning. They highlight the limitations and risks associated with large language models like GPTs and the need for more sustainable and efficient AI solutions.

The conversation covers various AI use cases in cybersecurity, including threat detection, behavioral analytics, cloud security monitoring, and automated incident response. They emphasize the importance of human involvement and decision-making in AI-driven cybersecurity solutions.

Here's ChatGPT's list of AI use cases for cybersecurity:

- AI for Threat Detection: AI analyzes large datasets to identify anomalies or suspicious activities that signal potential cyber threats.
- Behavioral Analytics: AI tracks user behavior to detect abnormal patterns that may indicate compromised credentials or insider threats.
- Cloud Security Monitoring: AI monitors cloud infrastructure, detecting security misconfigurations and policy violations to ensure compliance.
- Automated Incident Response: AI helps automate responses to cyber incidents, reducing response time and mitigating damage.
- Malware Detection: AI-driven solutions recognize evolving malware signatures and flag zero-day attacks through advanced pattern recognition.
- Phishing Detection: AI analyzes communication patterns, spotting phishing emails or fake websites before users fall victim.
- Vulnerability Management: AI identifies system vulnerabilities, predicts which flaws are most likely to be exploited, and suggests patch prioritization.
- AI-Driven Penetration Testing: AI automates and enhances pen-testing by simulating potential cyberattacks and finding weaknesses in a network.
- Anomaly Detection in Network Traffic: AI inspects network traffic for unusual patterns, preventing attacks like Distributed Denial of Service (DDoS).
- Cybersecurity Training Simulations: AI-powered platforms create dynamic, realistic simulations for training cybersecurity teams, preparing them for real-world scenarios.
- Threat Intelligence: NLP-based AI interprets textual data like threat reports, social media, and news to assess emerging risks.
- Predictive Risk Assessment: AI assesses and predicts potential future security risks by evaluating system vulnerabilities and attack likelihood.


DHIWay

Decentralized Identity: It’s Not What You Think

In an increasingly digital world, proving who we are has never been more critical or misunderstood. The conversation around decentralized identity often suggests that it will replace the systems we’ve relied on for so long, tearing down the old to make way for the new. But that’s not the reality. These identity models aren’t adversaries locked in a battle for dominance; they are complementary forces that, when combined, can create a more secure, flexible, and empowering future for us all.

Think about it: our identity isn’t just a name, an ID card, or a social media profile. It’s a complex web of credentials, reputations, and relationships rooted in something deeply personal and sovereign—the name given to us at birth. This idea of identity is naturally decentralized. Yet, in today’s digital world, we are forced to rely on borrowed identifiers—like email addresses, mobile numbers, and social media accounts—that leave us vulnerable and powerless.

What if we could reclaim that sense of sovereignty in the digital realm? Imagine having a digital identity as uniquely ours as our name—one that we fully own and control, without ever compromising our privacy or security.

To bring this vision to life, we must rethink digital identity—not as a choice between centralized or decentralized systems, but as a fusion of their strengths. When these two approaches unite, they create a powerful framework of trust that offers more security, flexibility, and empowerment than either could achieve alone.

The Nature of Identity: Rooted in Sovereignty

To understand the future of digital identity, we need to start with a simple but powerful truth: our identities are inherently sovereign. From the moment we are born, our identities begin with our names—given to or chosen for us, not issued by any central authority. These names belong to us, and only us. Over time, they become associated with a rich tapestry of experiences, accomplishments, and relationships that form our reputations.

In the physical world, we build our identities by linking credentials to our names—birth certificates from governments, diplomas from universities, and membership cards from professional organizations. Each of these credentials contributes to the reputation of our names, like threads weaving together the fabric of who we are. No single entity controls all these threads; they come from diverse sources, adding depth and nuance to our identities.

But in the digital realm, this natural decentralization begins to unravel. Online, our identities are often reduced to borrowed credentials—an email address from a tech company, a social media profile, or a phone number managed by a telecom provider. Third parties control these digital identifiers, which don’t truly belong to us. They can be revoked, altered, or exploited without our consent.

What’s more, we lack control over our data. In the current model, we are compelled to hand over vast amounts of personal information to third parties for authentication and authorization. This means our data—our actions, preferences, and relationships—ends up in centralized databases that are often opaque and vulnerable. We have little say over how this data is collected, used, shared, or sold, making us passive participants in our digital lives.

This brings us to a critical realization: our current digital identities do not reflect the sovereignty and flexibility of our real-world selves. Instead, they are fragmented and vulnerable, exposed to misuse and exploitation, and ultimately subject to the control of entities whose interests may not align with ours.

But what if our digital identities could be as sovereign and flexible as the names we were given at birth? What if we could build digital reputations similarly—by linking credentials to identities we fully own and control? This is where the concept of cryptographic identifiers—a new digital foundation—comes into play.

The Core of Digital Identity: A Key Pair as Our Digital Name

Public key cryptography, a cornerstone of digital security for decades, lays the groundwork for a digital identity we truly own and manage ourselves. It revolves around a pair of cryptographic keys: a private key known only to us and a public key, which we can share with others. This key pair becomes the digital root of trust—an anchor for our online identity that remains under our control alone.

Think of the private key as our personal signature, kept secret and secure, while the public key acts like our digital name—something we can share openly and widely. Together, they create a powerful method to authenticate who we are online, without relying on any third-party provider. Just like the names given to us at birth, our digital key pair is unique and completely within our control.

But how does a key pair build trust? Here’s where it gets interesting.  Just as our real-world name gains recognition and credibility through our experiences, accomplishments, and relationships, our digital identity earns its reputation through credentials tied to our key pair. These credentials—whether issued by a government, a university, or a professional organization—are cryptographically signed and secured.

What makes this powerful is that these credentials are verifiable at any time by anyone who needs to confirm our identity, qualifications, or achievements—without ever having to return to the original issuer. This instant, trust-based verification protects our privacy. It empowers us to build and present our digital reputation with the same confidence and autonomy we enjoy in the physical world.
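
As a rough sketch of this issue-once, verify-anywhere pattern, the following TypeScript snippet (Node crypto; the DID and claim are hypothetical) shows an issuer signing a claim tied to a holder's identifier, and any verifier later checking it without contacting the issuer. Real verifiable-credential formats add standard contexts, schemas, and revocation on top of this cryptographic core.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Hypothetical issuer (e.g., a university) with its own key pair.
const issuer = generateKeyPairSync("ed25519");

// A credential is, at its core, a signed claim about the holder's identifier.
const credential = {
  subject: "did:example:alice",               // the holder's identifier (illustrative)
  claim: { degree: "BSc Computer Science" },
  issuedAt: "2024-06-01",
};
const payload = Buffer.from(JSON.stringify(credential));
const proof = sign(null, payload, issuer.privateKey);

// Any verifier holding the issuer's public key can check the credential at
// any time -- no callback to the original issuer is required.
const valid = verify(null, payload, issuer.publicKey, proof);
console.log(valid ? "credential verified" : "credential rejected");
```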

Building Our Digital Reputation: The Key Pair in Action

Think of our digital key pair as a blank canvas, ready to be filled with the credentials that define us. Over time, we can attach verifiable credentials to this key pair—our digital driver’s license, a degree from our university, or proof of employment from our company. Each of these credentials contributes to our digital reputation, enabling us to build trust without giving up control.

Imagine needing to prove our professional qualifications to a potential employer. Instead of submitting physical documents or scans, we present a set of digital credentials tied to our key pair. The employer can instantly verify these credentials, thanks to cryptographic proofs that confirm the appropriate authorities issued them. No lengthy checks or third-party databases are required—just immediate, secure trust.

This concept extends beyond professional credentials. Suppose we need to access an age-restricted service online. Rather than disclosing our full name, date of birth, and address, we can provide a signed cryptographic proof that simply confirms we meet the age requirement without revealing any other personal information. The service provider trusts this proof because it is tied to our key pair and backed by verifiable credentials issued by trusted entities.

Anchoring Identity with Multiple Key Pairs: Flexibility and Context

The power of a decentralized digital identity doesn’t stop with a single key pair. We can have multiple key pairs for different contexts—each serving a specific purpose or representing a unique aspect of our digital selves. For example, one key pair might be used for professional credentials, while another could be designated for personal interactions or healthcare records. This flexibility allows us to maintain privacy and security across various domains, ensuring that only relevant information is shared with the appropriate parties.

The World Wide Web Consortium (W3C) Decentralized Identifier (DID) standard makes adopting this approach feasible across different systems and platforms. DIDs enable us to create and manage multiple digital identities, each anchored by its cryptographic key pair, in a way that is interoperable and recognized by various services and organizations worldwide.
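
For reference, resolving a DID yields a DID document roughly like the following, expressed here as a TypeScript constant with placeholder values; it lists the public keys that anchor and control the identifier.

```typescript
// Illustrative shape of a W3C DID document; the identifier and key are placeholders.
const didDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:example:123456789abcdefghi",
  verificationMethod: [
    {
      id: "did:example:123456789abcdefghi#key-1",
      type: "Ed25519VerificationKey2020",
      controller: "did:example:123456789abcdefghi",
      publicKeyMultibase: "z6Mk...placeholder",              // the public key, multibase-encoded
    },
  ],
  authentication: ["did:example:123456789abcdefghi#key-1"],  // keys usable for authentication
};
```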

Owning Our Digital Identity: A New Paradigm

We reclaim sovereignty over our online lives by anchoring our digital identity to a key pair that only we control. We decide which credentials to share, with whom, and for how long. This approach fundamentally shifts the power dynamics, allowing us to build and manage our digital reputation just as we do in the real world—by accumulating trusted credentials over time.

This doesn’t mean eliminating centralized systems; instead, it integrates them into a more flexible, user-centric model. Governments, universities, banks, and other institutions continue to issue credentials, but now they do so in a way that respects our control over our identities. This isn’t about replacing one system with another; it’s about creating a bridge that combines the best of both worlds, where centralized trust meets decentralized control.

A Future Anchored by Sovereignty and Flexibility

The promise of a truly self-sovereign digital identity is no longer a distant dream. By combining the strengths of cryptographic technology and decentralized frameworks like DIDs, we can create a new digital identity paradigm that respects our privacy, protects our data, and places control back in our hands. This isn’t about tearing down existing systems; it’s about enhancing them, building bridges, and creating a digital future where our identities are secure, trusted, and uniquely ours.

With cryptographic key pairs and the W3C DID standard as the anchors of this new approach, we move towards a future where our digital identities are as secure, private, and flexible as our real-world selves. The journey starts now, with each of us reclaiming the power to own and manage our digital selves, navigating the digital realm with confidence and autonomy.

The post Decentralized Identity: It’s Not What You Think appeared first on Dhiway.


PROPERTY TOKENIZATION – REVISITING THE WHY BEHIND DEMATERIALISATION

India’s real estate market is complex, with strict regulations on property ownership. Land disputes are a major issue, accounting for 66% of civil cases and causing significant economic drain. Poor record-keeping and outdated land titles contribute to these disputes. The document proposes using blockchain technology and Verifiable Credentials (VCs) to create a more efficient, transparent, and secure system for managing land records and resolving disputes. Real estate tokenization is emerging as a solution, allowing fractional ownership and increased liquidity. A partnership between Rooba.Finance and Dhiway aims to combine asset tokenization and blockchain technology to innovate in this space.

The overall goal is to use technology to address India’s property-related legal and economic challenges.

The Indian real estate market is a unique one, governed by countless laws, regulations, and state-level amendments which control, and prohibit, the purchase of land by non-domiciled Indian residents. As a rough rule of thumb, foreign nationals who do not reside in India cannot have property registered in their names. PIOs and NRIs are restricted from buying agricultural, plantation, farm and other such land, though they are not prohibited from purchasing, selling or inheriting residential or commercial land save for one caveat – some states prohibit non-domiciled individuals from purchasing land of any type. 

An indicative list of central laws that govern the purchase of land follows:

- Transfer of Property Act, 1882
- Registration Act, 1908
- Indian Stamp Act, 1899
- Real Estate (Regulation and Development) Act, 2016
- Benami Transactions (Prohibition) Act, 1988
- Foreign Exchange Management Act (FEMA), 1999

For NRIs to purchase residential property, the following documents are necessary:

- Passport and/or OCI Card
- PAN Card
- PoA registered for the specific transaction, if the NRI is not physically available for registration.

As regards agricultural land, all NRIs and PIOs are prohibited from purchasing it, though there is no bar on inheritance. However, in many states, even resident Indian citizens face restrictions relating to the purchase of land, or conversion of agricultural land to N.A. land by mutation. 

The long and short of it is that India makes it hard to buy real estate, requires stringent documentation, and has, for all intents and purposes, a set of federal and state-level laws in place to adapt to its diversity.

Despite this extensive legal system in place, an estimated 7.7 million people in India are affected by conflict over 2.5 million hectares of land, threatening investments worth more than Rs 14 lakh crore. Since land is central to India’s developmental trajectory, finding a solution to land conflict is a crucial policy challenge for the Indian government. Land disputes account for the largest set of cases in Indian courts – 25 percent of all cases decided by the Supreme Court involved land disputes, and surveys suggest that 66 percent of all civil cases in India are related to land or property disputes. The average pendency of a land acquisition case, from creation to resolution in the Supreme Court, is 20 years. Some reports indicate that more than two-thirds of litigation pertains to property.

Data around Supreme Court (SC) cases is alarming. Cases pertaining to property that manage to reach the Apex Court at ‘Special Leave Petition’ or ‘Leave to Appeal’ stages are a mixed bag, ranging from land acquisition to conventional title disputes. To put it into perspective, the pecuniary jurisdictions of most states’ district courts have been raised to unlimited to ensure that High Courts do not get clogged by litigation. Up until 2015, litigants could approach High Courts directly to file property cases concerning properties over a certain value. Now, commercial disputes must all go to district courts first, and require mandatory mediation in order to prevent lis (a legal dispute) from being joined in the first place. Despite this, there is an alarming rate of litigation prevalent across all asset-value classes. This trigger-happy litigious mentality has ramifications beyond the protracted pendency of cases. Individuals from lower socio-economic strata are unable to receive justice due to pendency in courts. Since they are unable to access quality legal advice, they often spend as long as 20 years or more litigating, generally on questions of title and devolvement of title. In principle, the Supreme Court must only deal with disputes concerning questions of law that have not been settled or require revisiting or interpretation. Broadly speaking, disputes with the highest incidence of percolating to the SC are land acquisition cases. By and large, as indicated by the figures above, 66% of all pending court cases comprise property-related disputes, which can be bifurcated into private disputes and disputes against the state (land acquisition). Private disputes (between private parties, juristic or natural) can be further divided into those involving title (competing title interests or encroachment) and those relating to devolvement (wills).

Cases which are not mediated or settled result in litigation, which has two economic outcomes. The first is that litigants lose money in hefty legal fees; the other is that the economy is detrimentally affected by assets being locked in encumbrance. Without proposing some utopian litigation-free universe, what can technology solve in such a status quo?

By 2040, the Indian real estate market is projected to grow to Rs. 65,000 crore (US$ 9.30 billion) from Rs. 12,000 crore (US$ 1.72 billion) in 2019, and to contribute 13% to the country’s GDP by 2025. Retail, hospitality, and commercial real estate are also growing significantly, providing the much-needed infrastructure for India’s growing needs. The problems at hand are an economic drain on the people and a judicial strain on the infrastructure, resulting in a lack of access to justice.

The solution? Verifiable provenance through digital records. Over the last decade, concerted efforts have been made to shift towards building and deploying Digital Public Infrastructure to solve the problems pertaining to data within India. Currently, the lack of trustworthy records accounts for a significant amount of litigation as well as the inability of government schemes to function. There are significant errors and discrepancies in the maintenance logs of land records. In a study conducted in Rajasthan, in 24 percent of cases the difference between the area on record and the area measured was more than 20 percent. To compound this, land titles are often considered presumptive, meaning that the person currently occupying the land is assumed to be its owner. The same study revealed that the state ceased maintaining records of land possession in 1972, and there is no data on land possession at the tehsil level. As a result, title records are frequently outdated; the registered owner might have died or sold the property without updating the records, making it challenging to determine current ownership.

Private disputes pertaining to joint ownership also take root in poor record-keeping. It gets particularly tricky when succession cases are instituted well into the future, sans any verifiable records. In India, devolvement follows religious or custom-based inheritance by default, unless expressly revoked by a will, thereby choosing testamentary succession (a quagmire of litigation in itself). All this has a detrimental impact on the ease of doing business rankings, specifically in respect of contract enforcement and property registration. India is currently ranked 163rd and 166th, respectively, on the abovementioned fronts. Both these factors, once again, are greatly affected by India’s persistent problem: an overwhelming number of land litigations.

In the early 90s, humanity was at the dawn of personal computing and the era of the internet. Juxtaposed to this groundbreaking advancement, India witnessed one of the largest scale financial frauds ever, the Harshad Mehta Scam. In this backdrop, the Securities and Exchange Board of India (SEBI) identified authenticity of securities as a paramount concern, and a hole to be plugged. By 1996, demat was mandated across public securities markets, ushering in an era of depositories, clearing corporations, registrar-cum-transfer agents and stock exchanges. SEBI used regulated intermediaries to ensure the safety and security of individuals participating in India’s securities markets. 

To date, some sectors of financial markets, such as private markets, have been left largely untouched by digitisation or dematerialisation. This has resulted in information asymmetry and data silos, culminating in opaque markets, inefficiencies in transactability and a lack of trust. At this juncture, we need to look towards innovative technology solutions to improve the sourcing, sharing and verification of data which assists the public in making financial decisions. At present, in 2024, we are witnessing increasing use cases of DLT and AI, and it seems only fitting that as we consider the evolving avatar of the internet, we must adopt and adapt or risk being mired in legacy market inefficiencies. In recent years, real estate tokenization has emerged as an unconventional investment option with advantages for both issuers and investors. The real estate sector now makes up about 40% of the digital securities market, amounting to approximately $200 million. Real estate tokenization typically turns a property’s value into tokens that can be transferred and owned digitally by recording them on a blockchain. These divisible tokens represent fractional shares of ownership in the real estate. A reliable database is necessary for private markets to become more liquid. Instead of being centralised, we think that this new database will be distributed and owner-controlled.

So, how does the Finternet Project and its contributors aim to solve this population-scale problem of verifiable data? 

The vision of the Finternet is to build a set of rails for a user-centric ecosystem that unifies various fractured and siloed ecosystems using universal principles translated through technology. In the narrow compass of real estate, availability of authenticated data relating to property will unlock the hidden financial potential of a traditionally illiquid asset, remedying a major cause of litigation in India. 

Verifiable Credentials

Finternet can revolutionise the administration and evidence process for dispute-resolution by integrating advanced digital tools and decentralised technologies. Through blockchain, it ensures that records and evidence are digitised and immutable, providing a reliable and tamper-proof source of truth. Verifiable Credentials (VCs) allow for instant authentication and verification of evidence, streamlining the process and ensuring authenticity. Real-time data access and transparency are enhanced, allowing for quicker decision-making. 

VCs are digital certificates that can be used to prove the authenticity of information regarding an individual, organization or an asset. These credentials are stored securely and can be presented and verified in a decentralised manner, without the need for intermediaries. VCs are particularly useful in scenarios where trustworthiness is a priority, like in the case of property disputes.

In the context of property, verifiable credentials can be employed to:

- Authenticate Property Ownership: VCs can be issued by government authorities or trusted entities to certify ownership of a property. These credentials can be cryptographically verified by any party, ensuring that the ownership claim is legitimate and reducing the likelihood of fraudulent claims.
- Streamline Property Transfers: During property transfers, VCs can be used to verify the identities of the parties involved, as well as the authenticity of the property title. This can significantly reduce the time and cost associated with the transfer process, as it eliminates the need for extensive paperwork and third-party verification.
- Resolve Title Disputes: In cases where there is a dispute over property ownership, VCs can serve as tamper-proof evidence of ownership history. The use of VCs can expedite the resolution process by providing courts or arbitration bodies with a clear, verifiable record of ownership, thus reducing the duration and complexity of litigation.
- Improve Transactability: By using VCs, all parties involved in a property transaction can have access to verified and up-to-date information. This transparency helps in faster business decisions such as loans-against-property, home loans, credit decisions, etc.
- Integrate with Smart Contracts: VCs can be integrated with smart contracts to automate the execution of agreements based on verified conditions. For instance, a smart contract could automatically release payment upon the verification of a property transfer credential, ensuring that both parties fulfill their obligations.
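
To make this concrete, here is a hedged sketch of what a property-ownership credential could look like, following the general shape of the W3C Verifiable Credentials data model and expressed as a TypeScript constant. The issuer, DIDs, parcel details, and proof value are all hypothetical placeholders, not an actual registry schema.

```typescript
// Illustrative verifiable credential attesting property ownership.
const propertyCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "PropertyOwnershipCredential"], // custom type, illustrative
  issuer: "did:example:land-registry",            // e.g., a state land-records authority
  issuanceDate: "2024-09-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:owner-4321",                 // the current owner's identifier
    parcel: { surveyNumber: "123/4A", district: "Example District" },
    interest: "freehold",
  },
  proof: {
    type: "Ed25519Signature2020",                 // cryptographic signature by the issuer
    created: "2024-09-01T00:00:00Z",
    verificationMethod: "did:example:land-registry#key-1",
    proofValue: "<issuer-signature-placeholder>",
  },
};
```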

By leveraging VCs within the property sector, India can move towards a more efficient, transparent and secure system of managing land records and resolving disputes. This technology has the potential to reduce the burden on the judiciary, minimise economic losses due to encumbered assets, and enhance the overall ease of doing business in the country.

Conclusion

The Indian real estate market faces significant challenges due to complex regulations, widespread land disputes, and outdated record-keeping systems. These issues result in economic inefficiencies, overburdened courts, and barriers to investment and development.

However, emerging technologies offer promising solutions to these long-standing problems. The integration of blockchain technology, Verifiable Credentials, and asset tokenization has the potential to revolutionize property management and transactions in India. By creating a more transparent, secure, and efficient system for recording and verifying property ownership, these innovations could:

- Reduce the number of property-related disputes
- Streamline property transfers and reduce associated costs
- Improve access to justice by providing clear, verifiable records
- Enhance the liquidity of real estate assets through tokenization
- Attract more investment to the real estate sector

The path forward involves continued development of these technologies, their integration into existing legal and administrative frameworks, and widespread adoption by stakeholders in the real estate sector. While challenges remain, the potential benefits of this technological revolution in property management are substantial and could transform India’s real estate landscape in the coming years.

Dhiway and Rooba alliance

Rooba.Finance and Dhiway are strategically collaborating to harness their respective strengths in asset tokenization and blockchain technology, driving innovation in the financial and property sectors. Rooba.Finance, with its expertise in asset tokenization, is pioneering the creation of digital representations of real-world assets, allowing for fractional ownership and enhanced liquidity in the market. Dhiway, a leader in blockchain-based infrastructure, provides the robust, secure, and transparent technology backbone necessary to support these digital assets. By integrating Dhiway’s advanced blockchain solutions, Rooba.Finance ensures that each tokenized asset is securely documented, traceable, and compliant with regulatory standards. This partnership not only facilitates the creation of new investment opportunities but also advances the secure and efficient management of digital assets, paving the way for a more decentralized and democratized financial ecosystem.

The post PROPERTY TOKENIZATION – REVISITING THE WHY BEHIND DEMATERIALISATION appeared first on Dhiway.

Friday, 13. September 2024

Anonym

Aries VCX: Another Proof Point for Anonyome’s Commitment to Decentralized Identity 

For nearly two years, Anonyome Labs has co-maintained an open source project from Hyperledger called Aries-VCX. VCX is an important decentralized identity (DI) community project, which provides the backbone for other DI software products, such as our own Sudo Platform DI Edge Agent SDK for native mobile applications. In this article, we will explore the details of this project, Anonyome’s contributions, and what’s next for this exciting project. 

What is Aries-VCX? 

Aries-VCX is a project under the Hyperledger Aries group. This group strives to provide complete toolkits for DI solutions and digital trust, including the ability to issue, store and present verifiable credentials with maximum privacy preservation, and establish confidential, ongoing communication channels for rich interactions. VCX sits alongside other popular projects such as Aries Cloud Agent Python (ACA-Py) and Credo (formerly Aries Framework JavaScript under Hyperledger). 

While these projects pursue a similar goal, they complement each other nicely. VCX is written primarily in Rust and targets both cloud and mobile native consumers. By comparison, Credo targets cloud and mobile JavaScript consumers, and ACA-Py targets only cloud consumers. Support for native mobile consumers was an essential goal when building the technology stack for Anonyome’s Edge Agent SDK and all other Sudo Platform SDKs, because providing native SDKs gives our consumers flexibility when integrating into their mobile applications and doesn’t limit them to JavaScript or React Native based environments. 

Further, VCX differs from other Aries projects in that it has historically focused on providing lower-level building blocks for DI SDKs and applications rather than batteries-included DI frameworks for consumers to pick up. We fully appreciate the low-level components because they give us the flexibility to design Anonyome’s Edge Agent SDK with an optimised internal engine and easy-to-use APIs that are in line with our Sudo Platform standards. However, VCX’s lower-level approach also presents a higher barrier to entry for other SDKs and applications to consume. 

Brief history of VCX 

VCX has been around since 2017 and is one of the first implementations of an Aries protocol-compliant library. Evernym created the original library, which was eventually moved into the Hyperledger Indy SDK project. This was to serve as a reference implementation for integrating with the Indy SDK for the Aries protocols. In 2020, the project was moved into a dedicated Hyperledger project by Absa Group, beginning a new era of development beyond the Indy SDK. 

VCX today provides a DI toolbox with a large suite of functionality that Anonyome and others in the industry use. The toolbox includes: 

- DIDComm V1: VCX supports DID Communication V1, allowing end-to-end-encrypted messages to be encoded and decoded between DIDs.
- Aries protocols: VCX provides tools for stepping through various agent-to-agent protocols defined by Aries. The protocols implemented in VCX allow the agent to engage with other agents to establish new secure connections, issue or receive credentials, present or verify a presentation of credentials, exchange text-based messages, and more. The latest list of supported protocols is here.
- DID management: DIDs are foundational to DI, and VCX has invested time in creating a reliable and clean set of DID management tools for a range of different DID methods. This allows consumers to easily resolve, create and update DIDs involved in their DI interactions. This toolbox is designed with extensibility in mind, allowing new DID methods to be added in the future for further interoperability.

Anonyome’s journey with VCX

In our pursuit of creating a highly optimized and secure Edge Agent SDK, we wanted to bring into our technology stack the latest cutting-edge DI and Aries libraries. However, given the history we’ve just outlined, VCX in 2022 was highly tethered to the Indy SDK—an SDK that was unfortunately heading towards deprecation at the time. As a strong believer in and adopter of VCX, we set out to join VCX and contribute a major pivot to the project: decoupling VCX from the Indy SDK. This was a major refactor that other Aries projects, such as ACA-Py, also had to work through around this time.  

The changes allowed consumers to plug in and use modern Indy SDK replacement components (Aries Askar, Indy VDR, Anoncreds-rs) instead. In practice, this means users benefit from receiving the latest features and optimizations from these libraries, as well as better interoperability (e.g., a larger range of Decentralized Identifier (DID) methods beyond Indy-based DID methods). 

Shortly after Anonyome’s contribution, in early 2023 we became a co-maintainer of the VCX project and we have worked alongside other individuals and companies such as Absa Group and Instnt. Since joining, Anonyome has contributed to a wide range of aspects in VCX, such as: 

- Kickstarting a modern foreign function interface (FFI) wrapper using Mozilla's UniFFI, allowing the Rust library to be consumed natively from Android and iOS
- Implementing some of the latest Aries Interop Protocols (AIP2 credential issuance and presentation messages)
- Contributing to the Aries Agent Test Harness on behalf of VCX, an effort that allows VCX to be benchmarked for interoperability with other Aries agents (such as ACA-Py and Credo)
- Performing regular maintenance duties: contributing to architectural design decisions, codebase housekeeping, assisting the VCX community, and participating in regular community meetings

What's next for VCX?

VCX has come a long way since its beginnings with the Indy SDK: it has advanced from an Indy reference implementation into a rich and extensible toolbox for DI operations, Aries, DIDs, DIDComm, AnonCreds, and so on. But VCX development is not slowing down, especially as standards in the DI ecosystem continue to iterate and grow rapidly.

VCX is keeping its eye on what the community is asking for, and where the ecosystem is heading. A few notable items ahead include: 

- DIDComm V2: Currently VCX uses DIDComm V1 for message transport and structuring in the Aries protocols it supports, but the next iteration of the standard—DIDComm V2—is now progressively rolling out into the Aries community. VCX plans to be a part of this transition.
- VCX framework: As mentioned, VCX has historically been a lower-level "toolbox" for DI operations, which is great for flexibility but hinders broad adoption. Our co-maintainers and contributors at Instnt are now working on building a framework on top of VCX, an initiative to provide a more application-friendly interface (like ACA-Py and Credo).
- DID toolbox enhancements: Since the move away from Indy, VCX has pursued support for a wider range of DID methods from other blockchain and non-blockchain ecosystems, such as did:web and the latest did:peer specification. VCX will continue growing its DID method support, building a rich and clean toolbox for "all things DIDs". A minimal sketch of did:web resolution follows below.
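
To make the DID toolbox concrete, did:web is a useful example because a did:web identifier maps deterministically to an HTTPS URL where the DID document is published. The sketch below illustrates that mapping in Python, following the did:web specification; it is our illustration only and not VCX's API (VCX itself is a Rust library), and the example DIDs follow the spec's conventions.

```python
import json
import urllib.request
from urllib.parse import unquote

def did_web_to_url(did: str) -> str:
    """Map a did:web identifier to the HTTPS URL of its DID document,
    per the did:web method: colons separate path segments, and a bare
    domain resolves to /.well-known/did.json."""
    if not did.startswith("did:web:"):
        raise ValueError("not a did:web identifier")
    segments = [unquote(s) for s in did[len("did:web:"):].split(":")]
    host, path = segments[0], segments[1:]
    if path:
        return f"https://{host}/{'/'.join(path)}/did.json"
    return f"https://{host}/.well-known/did.json"

def resolve_did_web(did: str) -> dict:
    """Fetch and parse the DID document (requires network access)."""
    with urllib.request.urlopen(did_web_to_url(did)) as resp:
        return json.load(resp)

assert did_web_to_url("did:web:w3c-ccg.github.io") == \
    "https://w3c-ccg.github.io/.well-known/did.json"
assert did_web_to_url("did:web:example.com:user:alice") == \
    "https://example.com/user/alice/did.json"
```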

Anonyome is very excited for the future of VCX and we’re glad we were a part of the journey thus far as a co-maintainer. We’d like to give a huge thanks to the co-maintainers and contributors who have made VCX what it is today—open-source thrives most with a diverse community behind it. 

If you'd like to join the VCX efforts, or just hear more about what we're doing, feel free to join our biweekly community meeting or reach out on Discord.

The post Aries VCX: Another Proof Point for Anonyome’s Commitment to Decentralized Identity  appeared first on Anonyome Labs.


paray

Practical Steps for Advising on BOIR Compliance

When advising clients on FinCEN's Beneficial Ownership Information (BOI) reporting obligations, professionals should offer clear, practical guidance to ensure compliance and mitigate potential risks. It is helpful to start by educating small business clients on the fundamentals of BOIR filing: – Who needs to file: Explain that most small corporations, LLCs, … Continue reading Practical Steps for Advising on BOIR Compliance →

KuppingerCole

cidaas access management

by John Tolbert

This KuppingerCole Executive View report looks at the issues and options available to IT managers and security strategists to manage identity access to complex IT infrastructures. A technical review of the cidaas access management platform is included.

Decentralized Identity: Potential for Breakthrough Innovation

by Martin Kuppinger

Decentralized Identity (DCI) has evolved over more than a decade and is reaching the tipping point for widespread adoption and triggering massive innovation in how businesses and governments interact with customers, consumers, employees, or citizens.

From centralized identity siloes to decentralized identity wallets

DCI, also referred to as SSI (Self-Sovereign Identity), is a concept that differs fundamentally from established models. Commonly, organizations manage the identities of individuals in their own systems, creating siloes of identities and forcing individuals to register with many different parties. Everyone experiences this on an almost daily basis when using the Internet. While some identities, such as those from LinkedIn, Facebook, Google, or Apple, can be reused, they are still centralized and not ubiquitous.

In contrast, DCI leaves the identity and its attributes with the individual. Based on standards, that information can be flexibly exchanged with other parties. So-called verifiable credentials (VCs) provide information such as a name, an email address, a postal address, an employer, an employment status, or any other attribute. The concept of DCI is open and does not limit what can be conveyed in a VC. This is essential because it enables DCI to be used for any type of use case, especially since things, devices, and organizations can (and, over time, will) have decentralized identities of their own.

DCI builds on a concept of issuers that issue VCs, holders – commonly the individuals – that hold VCs, and verifiers that consume VCs. The VCs are stored by the individual in so-called wallets. Over time, the term wallet may turn out to be misleading, because we will potentially hold far more information as VCs than we carry as cards in our wallets today. The use cases will also become much broader.
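
As a concrete illustration of the issuer-holder-verifier model, this is roughly the shape of a VC under the W3C Verifiable Credentials Data Model, written here as a Python literal. The credential type, the issuer DID, and the subject attributes are invented for the example; a real credential would also carry the issuer's cryptographic proof.

```python
employment_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "EmploymentCredential"],  # illustrative type
    "issuer": "did:web:employer.example",                      # invented issuer DID
    "issuanceDate": "2024-09-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder123",   # the holder's DID
        "employer": "Example Corp",
        "employmentStatus": "active",
    },
    # A real VC includes a "proof" block (e.g., a Data Integrity proof or
    # JWT signature) added by the issuer; omitted in this sketch.
}
```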

Decentralized identity: More than just verification, onboarding and authentication

DCI today is frequently seen as a means of having a reusable verified identity on hand, based on human-assisted or fully automated IDV (Identity Verification) processes. This enables trusted interactions with other parties such as organizations or governmental agencies.

The VCs then provide additional data and can, for instance, simplify onboarding processes such as registering with an eCommerce site. Based on the verified identity, the secure wallet, and the ability to open that wallet, authentication processes can also be simplified.

However, looking just at these aspects only scratches the surface of the potential that DCI holds. VCs can be used for process automation and optimization. Envision onboarding external contributors to a project: this process can become fully automated based on the name, the employer, the employment status, and some other information. Or envision applying for a loan at a bank based on other VCs, ranging from the verified identity to monthly salary statements, marital status, proof of existing real estate, and so on. The cost of the expensive AML (Anti-Money Laundering) and KYC (Know Your Customer) processes in banks would sink massively, as would the cost of approving (or rejecting) loans. Process cost optimization is a massive potential of DCI.
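
As a minimal sketch of what such automation could look like on the verifier side, the policy gate below auto-approves onboarding only when the presented VCs cover the required claims and come from trusted issuers. The issuer DIDs, credential type names, and function shape are assumptions for illustration; signature verification is presumed to have happened beforehand.

```python
TRUSTED_ISSUERS = {"did:web:employer.example", "did:web:idv.example"}  # assumed
REQUIRED_TYPES = {"VerifiedIdentity", "EmploymentCredential"}          # assumed

def can_auto_onboard(presented_credentials: list) -> bool:
    """Approve onboarding automatically only if every required credential
    type is present and issued by a trusted party."""
    satisfied = set()
    for vc in presented_credentials:
        if vc["issuer"] in TRUSTED_ISSUERS:
            satisfied.update(t for t in vc["type"] if t in REQUIRED_TYPES)
    return satisfied == REQUIRED_TYPES
```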

But there is more. Consent could be managed by VCs that allow the use of certain information by defined parties for a defined purpose and limited time. People could share health data in a controlled manner as VCs. The potential is virtually infinite and allows for breakthrough innovation in the digital economy.

Breakthrough potential: Disruption in business that does not break IT

DCI can become disruptive to the business, with organizations that leverage its potential winning by delivering new, innovative services while also optimizing their processes and thus their costs. We expect the recent eIDAS 2.0 regulation, which among other changes mandates that EU member states provide decentralized identity wallets (the EUDI Wallet) to every citizen and adopt this technology for eGovernment use cases, to significantly increase the speed of adopting DCI approaches. These wallets are a foundation for implementing further DCI use cases.

Fortunately, disruption in business does not equal disruption in IT. DCI adds to what exists. When a customer is registered via DCI and purchases goods, this is still reflected by records in the ERP system of the organization. When someone is onboarded, there still might be an entry in an internal directory.

Just adding DCI at the front end of the organization will not unlock the full potential, though. Consuming VCs to make decisions, from access authorizations to process automation, requires changes in the backends. In many cases, this will be an evolutionary process.

Given the immense potential of DCI, it is high time that organizations started evaluating that potential and thinking about the innovation it can bring to their business, or to the way governments serve their citizens. This must involve everyone in the organization, not just the identity team.

As a guest of Ergon Informatik, Martin Kuppinger, Principal Analyst at KuppingerCole Analysts, will talk about this topic more in depth at the it-sa Expo & Congress in Nuremberg on October 23rd.


Metadium

CertiK Skynet

Dear Community,

We are pleased to share the latest update on Metadium’s progress with CertiK Skynet.

In our commitment to the continuous development and trust of the Metadium project, we prioritize enhancing security and transparency. As part of this effort, Metadium has recently completed a security audit and KYC certification with CertiK Skynet.

What is CertiK Skynet?

CertiK Skynet is a platform that monitors and evaluates the security and reliability of blockchain and cryptocurrency projects in real-time. It provides services related to security audits of smart contracts and blockchain systems. Skynet focuses on continuously monitoring each project’s smart contracts and detecting potential threats.

- Smart Contract Audits: CertiK rigorously reviews and analyzes the code of smart contracts to identify vulnerabilities and weaknesses that malicious actors could exploit. This process ensures that blockchain projects are secure and trustworthy.
- Penetration Testing: The company conducts thorough penetration testing to simulate potential attacks, safeguarding blockchain systems from hacks and security breaches.
- Security Monitoring: CertiK offers ongoing monitoring of blockchain projects to identify and address potential threats in real time.
- Skynet: CertiK's automated security and monitoring tool provides real-time insights, on-chain monitoring, and automated auditing.

Smart contracts are a core technology in cryptocurrency projects, essential to enhance project efficiency, transparency, and trustworthiness. Through this technology, projects can operate autonomously and offer users and investors a high level of security.

Key Achievements:

- CertiK Security Score increased by 5.88 points.
- Security Score Rank rose by 513 positions.
- Obtained KYC certification badge.

Key Highlights:

CertiK Skynet Audit: Metadium has confirmed the safety of its platform’s code and systems through a thorough security audit by CertiK Skynet. Twenty-nine items were approved and improved during this audit, and the code audit score increased by 23.68 points.

KYC Certification:

Additionally, Metadium has enhanced the transparency of its platform operations through CertiK Skynet’s KYC certification process. KYC certification is a critical procedure that verifies the project team’s identity and assesses compliance with anti-money laundering (AML) regulations. CertiK’s KYC service maintains the highest standards of data protection while providing rigorous scrutiny of the project team’s personal identity and background.

CertiK’s investigators validate cryptocurrency development teams and award a “KYC Badge” to those who successfully pass the due diligence process. This badge enhances the project team’s accountability and trustworthiness while reducing and mitigating risks of fraud and abuse. Metadium has obtained this badge, demonstrating its adherence to laws and regulations.

CertiK Skynet Score:

As a result of all these processes, Metadium’s CertiK Skynet rank and score have improved. This score reflects a comprehensive evaluation of Metadium’s security, stability, and public aspects, reaffirming the project’s technical excellence and reliability to the market.

The Metadium team is committed to continuing to build an even safer and more reliable platform. The audit and certification through CertiK Skynet are just the beginning, and we will consistently strive to maintain your trust.

Thank you for your continued support.

Metadium Team

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

CertiK Skynet was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 12. September 2024

KuppingerCole

The Security You Need: Seamlessly Integrating PAM and IGA for Ultimate Protection

In today's rapidly evolving cybersecurity landscape, organizations face significant challenges in integrating Privileged Access Management (PAM) and Identity Governance and Administration (IGA) systems. The complexity of integration, especially with legacy systems, coupled with the need to scale for cloud environments, poses substantial hurdles for IT professionals seeking to enhance their security posture.

Modern technology offers solutions to these challenges through unified identity platforms. These platforms enable organizations to manage security from on-premises to cloud environments with modular, integrated solutions across IGA, IAM, PAM, and Active Directory Management and Security. By leveraging API-first approaches and identity correlation systems, businesses can achieve seamless integration, reduce operational risks, and support agile just-in-time scenarios.

Paul Fisher, Lead Analyst at KuppingerCole, will discuss the latest trends in PAM and IGA integration, highlighting the importance of a unified approach to identity security. He will explore the challenges organizations face in implementing these systems and offer insights into overcoming common obstacles, ensuring compliance, and maintaining robust governance in an ever-changing threat landscape.

Jason Moody, Global Product Marketing Manager, PAM, and Bruce Esposito, Global Product Marketing Manager, IGA, both from One Identity, will showcase their Unified Identity Platform. They will demonstrate how this solution addresses identity sprawl, enhances business agility, and supports both internal and external users. The speakers will also highlight One Identity's approach to integrating PAM and IGA, emphasizing its flexibility and scalability.


Finicity

Nacha’s Preferred Partner offerings evolve to include open banking and account validation

As governor of the automated clearing house (ACH) Network that moves $80 trillion in funds electronically each year, U.S. payments industry association Nacha has been moving payments forward for 50 years. In recognition of the tremendous, data-driven changes shaping the industry in just the last few years, Nacha updated the categories for its Preferred Partner Program.

Nacha selects Preferred Partners, including Mastercard, whose payments technology offerings align with Nacha’s network advancement strategy. Mastercard Open Banking services are provided by Finicity, which has been a Nacha preferred partner in all partner solutions categories — previously defined as Compliance, Risk and Fraud Prevention, and ACH Experience — since 2020.

Going forward, Mastercard will continue to provide advanced, secure and trusted payment solutions as a Nacha Preferred Partner in three key areas: Risk and Fraud Prevention, as well as new categories Account Validation and Open Banking. These solutions are integral to the future of digital payments.

The power of consumer-permissioned data

Account-to-account (A2A) consumer bill payments and transfers totaled $9 trillion in 2023, and continue to grow at a 7% compound annual rate, according to Nacha, driven by consumers’ choice for fast and convenient payment options. Failed payments and fraudulent charges can be costly and take time to resolve. So it’s critically important to protect A2A payments with insights and analytics that keep risk and cost to a minimum.

Ensuring secure and successful digital payments starts with a robust account validation process to verify critical details like account type, ownership and balance information. These solutions not only help optimize payments, reduce risk and lower costs for fintechs and merchants; they also enable the safe and seamless payment experiences that end users demand. Mastercard Open Banking for Payments solutions include:

- Account Owner+: Verifies identity by analyzing risk signals, insights and scores related to personal information, device details and IP addresses.
- Account Payment Details: Retrieves account and routing numbers and indicates real-time payment availability.
- Balances: Gathers insights from cleared and available balances and time stamps, with a dynamic recency setting.
- Payment Success Indicator: De-risks payments with predictive insights from a weighted, multifactor settlement risk score.
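
To illustrate how checks like these might compose before an A2A debit is initiated, here is a minimal sketch of a pre-payment gate. The field names, threshold, and decision rule are assumptions for illustration, not Mastercard's actual API.

```python
from dataclasses import dataclass

@dataclass
class ValidationResult:
    owner_match: bool         # claimed payer matches account owner records?
    routing_ok: bool          # account and routing numbers resolve?
    balance_sufficient: bool  # enough cleared/available balance?
    settlement_risk: float    # 0.0 (safe) .. 1.0 (risky)

def should_initiate_ach(v: ValidationResult, risk_threshold: float = 0.3) -> bool:
    """Initiate the A2A payment only when every pre-payment check passes
    and the predicted settlement risk is acceptably low."""
    return (v.owner_match and v.routing_ok and v.balance_sufficient
            and v.settlement_risk < risk_threshold)
```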

Mastercard’s advanced global network and decades of experience in risk and fraud prevention can help fintechs and merchants make smarter decisions in a fast-moving digital payments landscape. Ultimately, we strive to help our customers, partners and end users realize all the benefits of next-generation A2A payment technologies with the lowest possible risk.

To learn more about Mastercard Open Banking for Payments, click here.

The post Nacha’s Preferred Partner offerings evolve to include open banking and account validation appeared first on Finicity.


Spruce Systems

Meet the SpruceID Team: Parke Hunter

Parke, SpruceID’s marketing manager, combines marketing expertise and customer focus to help drive success.
Name: Parke Hunter
Team: Marketing
Based in: Denver, Colorado

About Parke

After getting my marketing degree from Virginia Tech (Go Hokies!), I landed my first job selling commercial insurance at GEICO—fun fact: I got to be the GEICO Gecko for a day.

I then transitioned into working in software implementation and customer success at a food service tech company. Still wanting to pursue a career in marketing while being able to continue working closely with the product development team and customers, I found my love for product marketing. I went on to work as a product marketing manager for a range of products (from data analytics software tools to Atlassian’s app development platform) for five years at Alteryx, Sisense, and Atlassian.

I started at SpruceID last year and have loved every minute of it! It's exciting to see how the company has grown throughout my time here, and I have had the opportunity to experiment and try my hand at other areas of marketing that I may not have been as familiar with before.

Parke as the GEICO Gecko

Can you tell us about your role at SpruceID?

At SpruceID, my role spans managing our content funnel, social media, and customer highlights/case studies and helping support certain events such as hackathons, business development, and website updates. We are also gearing up to build out our product marketing function, which I am looking forward to.

What do you find most rewarding about your job?

What’s most rewarding about my job is that I feel that my work really impacts our company and mission. I feel driven and motivated by how our products help people.

Also, I may be biased, but our team is the best. SpruceID is made up of some of the smartest, kindest, and most fun individuals I have ever met. They are supportive, encouraging, and come together to work as a team and achieve a goal in a way I have never seen before.

What is the most important quality for someone in your role to have?

I think that the most important quality in a marketer is curiosity. 

Curiosity for understanding customers and personas, as well as the industry you're in, spotting trends in data, problem-solving, and adapting to change in case business needs shift and you have to learn new skills.

What has been the most memorable moment for you at SpruceID so far?

There have been so many it’s hard to choose!! One certainly stands out, though. At our fall 2023 offsite in Dublin, I was plucked from the crowd in an Irish pub to do an Irish jig on stage in front of hundreds of locals (and the entire company who I had just met in person for the first time!).

The moment we launched the California mDL was also a special and memorable moment for me.

How do you define success in your role, and how do you measure it?

There are so many ways our marketing team defines and measures success, from top to bottom of funnel.

We measure everything from brand awareness to lead generation, revenue growth, content engagement metrics, customer feedback, and awards/recognition, just to name a few. In marketing, we are also constantly evaluating the competitive landscape and understanding where we fit into it. As SpruceID grows, I know we’ll track more success metrics.

I am data and metrics-driven, and I define success in my role by the impact my work has on driving measurable results. Success to me means continuously learning, improving, and contributing to SpruceID's overall growth and strategic goals.

Fun Facts

What do you enjoy doing in your free time?

In my free time, you can find me road-tripping, hiking or snowshoeing as one does in Colorado, watching reality TV, studying (I am currently getting my master's degree online), and hanging out with friends! I recently started Denver's first "Food Critics Club" with a group of friends. We set out to taste-test a certain type of food (e.g., all of the croissants or empanadas in Denver) and have a picnic to try them all and rate them. That has been a blast!

If you could be any tree, what tree would you be and why?

I would be a palm tree! Calm, resilient, and adaptable. Palm trees seem relaxed, go with the flow, and thrive in the sun (like me), but they are also much tougher than they seem and can weather wind and storms.

Interested in joining our team? Check out our open roles and apply online!

Join Our Team

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


KuppingerCole

Nov 19, 2024: Identity Security and Management – Why IGA Alone May Not Be Enough

Organizations are confronted with unprecedented challenges in managing and securing identities across hybrid environments due to the growing complexity of the digital landscape. While Identity Governance and Administration (IGA) solutions provide a foundation, the increasing complexity of identity ecosystems demands a more comprehensive approach to maintain visibility, security and control.

Ocean Protocol

DF106 Completes and DF107 Launches

Predictoor DF106 rewards available. DF107 runs Sept 12 — Sept 19, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 106 (DF106) has completed.

DF107 is live today, Sept 12. It concludes on September 19. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF107 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
- To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.
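
For intuition only, here is a toy sketch of the per-epoch predict-and-stake loop a Predictoor bot performs. The momentum heuristic and stake sizing are invented for illustration; real bots are built on Ocean's published bot templates rather than anything shown here.

```python
def predict_direction(recent_prices: list) -> tuple:
    """Toy momentum predictor: predict 'up' if the last move was up, and
    stake more OCEAN when recent moves agree with that direction."""
    up = recent_prices[-1] >= recent_prices[-2]
    moves = [recent_prices[i + 1] >= recent_prices[i]
             for i in range(len(recent_prices) - 1)]
    agreement = sum(m == up for m in moves) / len(moves)
    stake = 10.0 * agreement  # stake (in OCEAN), scaled by confidence
    return up, stake

print(predict_direction([100.0, 101.0, 102.5, 102.0, 103.0]))  # (True, 7.5)
```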

4. Specific Parameters for DF107

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors who have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF106 Completes and DF107 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

KYC (Know Your Customer) Checklist: Simplified

Achieve KYC compliance with our comprehensive checklist, including documents, best practices, and identity verification tips.

Know Your Customer (KYC) programs are a way for financial institutions to verify the identity of their clients. Not only does it help ensure compliance with government regulations, but KYC is also an important step in preventing fraud and other illegal financial activities. Without it, businesses in the financial sector could be subject to government penalties and a loss of customer trust. In this article, we’ll take a deeper look at KYC best practices and run through an easy-to-understand compliance checklist.

Wednesday, 11. September 2024

Microsoft Entra (Azure AD) Blog

Omdia’s perspective on Microsoft’s SSE solution

In July, we announced the general availability of the Microsoft Entra Suite and Microsoft’s Security Service Edge (SSE) solution which includes Microsoft Entra Internet Access and Microsoft Entra Private Access.  

Microsoft’s vision for SSE

Microsoft’s SSE solution aims to revolutionize the way organizations secure access to any cloud or on-premises applications. It unifies identity and network access through Conditional Access, the Zero Trust policy engine, helping to eliminate security loopholes and bolster your organization’s security stance against threats. Delivered from one of the largest global private networks, the solution ensures a fast and consistent hybrid work experience. With flexible deployment options across other SSE and networking solutions, you can choose to route specific traffic profiles through Microsoft’s SSE solution.

Omdia's perspective

According to Omdia, a leading research and consulting firm, Microsoft’s entry into the SASE/SSE space is poised to disrupt the market. Omdia highlights that Microsoft’s focus is on an identity-centric SASE framework, which helps consolidate technologies from different vendors by extending identity controls to your network and enhancing team collaboration. A key strength for Microsoft, according to Omdia, is its ability to introduce Microsoft Entra Internet Access and Microsoft Entra Private Access seamlessly into existing identity management conversations—a strength that could lead to broader adoption of network access services as part of the same platform.

Conclusion

As you navigate the complexities of securing network access, Microsoft’s Security Service Edge solution helps you transform your security posture and improve user experience. It simplifies collaboration between identity and network security teams by consolidating access policies across identities, endpoints and network, all managed in a single portal - the Microsoft Entra admin center. Microsoft’s SSE solution provides a new pathway to implement zero trust access controls more effectively, enabling your organization to enhance its security posture while leveraging existing Microsoft investments.

To learn more about Omdia’s perspective on Microsoft’s SSE solution, read Omdia’s report, Microsoft announces general availability of its SASE/SSE offering.

Learn more and get started 

Stay tuned for more Security Service Edge blogs. For a deeper dive into Microsoft Entra Internet access and Microsoft Entra Private Access, watch our recent Tech Accelerator product deep dives.

To get started, contact a Microsoft sales representative, begin a trial, and explore Microsoft Entra Internet Access and Microsoft Entra Private Access general availability. Share your feedback to help us make this solution even better. 

Nupur Goyal, Director, Identity and Network Access Product Marketing 

Read more on this topic

- Simplify your Zero Trust strategy with the Microsoft Entra Suite and unified security operations platform, now generally available
- Microsoft's Security Service Edge products now in General Availability
- Microsoft Entra Internet Access
- Microsoft Entra Private Access

Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

- Microsoft Entra News and Insights | Microsoft Security Blog
- Microsoft Entra blog | Tech Community
- Microsoft Entra documentation | Microsoft Learn
- Microsoft Entra discussions | Microsoft Community


auth0

All You Need To Know About Passkeys at Auth0!

There are so many resources out there about passkeys and each vendor has its own implementation of the standard. Let’s answer some of your frequently asked questions about passkeys at Auth0!

Indicio

Biometric digital identity travel and hospitality Prism report

The post Biometric digital identity travel and hospitality Prism report appeared first on Indicio.

Ontology

Ontology Weekly Report: September 3rd — 9th, 2024

Ontology Weekly Report: September 3rd — 9th, 2024

Ontology

At Ontology, we’re continuing to engage closely with our community, ensuring consistent communication and collaboration. Here’s what’s been happening:

Community Call and Privacy Hour
Our regular Community Call and Privacy Hour took place as planned, fostering open conversations on decentralized identity and privacy. If you missed it, catch up with the recording here.

ONTO Wallet New Node Registration Tutorial
Stay on top of your game! We've released a new video tutorial on how to register a node, making it easier than ever to get started.

Joining the Exocore Ecosystem
ONTO Wallet is now a part of the Exocore ecosystem, reinforcing our commitment to providing top-tier decentralized solutions.

Orange Protocol ENS on Base Campaign
We're excited to celebrate ENS's expansion to the Base chain, a major step toward bringing billions of people onchain! You can now mint and manage ENS subnames directly on Base with lower gas fees. In collaboration with the artist MEK, we've unveiled artwork capturing this milestone. This campaign boosts the integration of ENS as a digital identity in decentralized applications. Don't miss out — join the campaign today!

Community

Engagement is at the heart of what we do. This week, we kept the momentum going with interactive sessions and fun activities:

Wordle Game
We hosted our first-ever Wordle game during this week's discussions, and it was a hit! Due to its success, it will now become a monthly feature. Special thanks to our hosts, SasenDish and Iamfurst, for their energy!

Telegram Community Discussion
The Ontology French Telegram channel hosted a session on the history of crypto, focusing on the Mt. Gox collapse. Special thanks to Mathus95 for his valuable insights.

Publications

Check out our latest articles for deep dives into critical Web3 issues:

Decentralized Identity and Reputation: Balancing Freedom and Regulation
Discover how decentralized identity systems can protect privacy while addressing the need for regulation. Real-world examples like Silk Road and Tornado Cash illustrate the challenges and solutions. Read more.
With transparency and engagement, we could create a system that balances freedom with responsibility.
Mark Cuban’s Challenge to Trump Supporters
This article highlights Mark Cuban’s comments and their relevance to the echo chambers in venture capital. Read here.
As we continue to develop Web3 technologies, let’s push for a world where investor reputations and venture capital histories are public, verifiable, and untouchable by spin.
Stay Connected

Stay engaged and informed by following us on our social media channels. Your participation is essential as we continue to build a more secure and inclusive digital world together.

Ontology Website / ONTO Website / OWallet (GitHub) / Twitter / Reddit / Facebook / LinkedIn / YouTube / Telegram Announcements / Telegram English / Discord

Ontology Weekly Report: September 3rd — 9th, 2024 was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Protecting Cloud Environments at Scale

by Dominik Sowinski

In today’s cloud-driven world, securing digital infrastructure is more challenging than ever. With advanced persistent threats (APTs) on the rise and global conflicts intensifying cyber risks, adapting cloud security strategies is essential. At cyberevolution 2024, Dominik Sowinski, Cybersecurity Architect at Siemens AG, will explore how organizations can fortify their cloud environments against emerging threats.

Dominik’s talk will cover the latest attack trends and offer strategies for protecting cloud infrastructures at scale. He’ll delve into how AI, automation, and secure architecture can help mitigate risks, while highlighting best practices for building a resilient cloud security framework.

For professionals tasked with safeguarding their organization's cloud operations, this session is a must. Don’t miss out on the opportunity to stay ahead of evolving threats in today’s dynamic cybersecurity landscape.


Metadium

Explorer Update

Dear Community,

We are excited to announce that the Metadium Explorer website has been updated. A new feature has been added to the Token Transfer menu that restricts queries beyond the displayed offset range. This allows you to access data more reliably, improving the overall user experience.

Metadium will continue to prioritize your convenience and security as we make ongoing improvements.

Thank you.

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

Explorer Update was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

What is Dynamic Access Control? Ties to Authorization

Benefits of dynamic access control and how it works, with a focus on its role in financial services and key features for improved access management

Introduced as part of Windows Server 2012, Dynamic Access Control (DAC) enables administrators to regulate network access based on a number of dynamic variables. For instance, dynamic access control can grant a user access to network resources while on a private internet connection, but restrict their access if they’re on a public wi-fi network. This makes dynamic access control well-suited to meeting the demands of modern access management. Financial service providers can use dynamic access control to enhance their data governance in a way that doesn’t interfere with the user experience.
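
As a rough illustration of the idea (not the Windows Server API), the sketch below combines user, device, and network attributes at request time, mirroring the private-versus-public-network example above. DAC itself expresses comparable rules as conditional expressions on resource ACEs, such as (@User.Department == "Finance"); all names and rules in the sketch are invented.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user_department: str
    device_managed: bool
    network: str  # "corporate", "home", or "public-wifi"

def access_allowed(ctx: AccessContext, resource_sensitivity: str) -> bool:
    """Evaluate access dynamically from request-time attributes."""
    if ctx.network == "public-wifi":
        return False  # restrict access entirely on untrusted networks
    if resource_sensitivity == "high":
        return ctx.device_managed and ctx.user_department == "Finance"
    return True

assert not access_allowed(AccessContext("Finance", True, "public-wifi"), "high")
assert access_allowed(AccessContext("Finance", True, "corporate"), "high")
```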


BlueSky

Share video on Bluesky!

Bluesky now has video!

After much anticipation, you can now share videos on Bluesky! Let’s dive right into the quick facts.

Quick facts

- Each post can contain one video.
- Videos can be up to 60 seconds long.
- Bluesky currently supports .mp4, .mpeg, .webm, and .mov video files.
- By default, videos will auto-play. You can turn off auto-play in Settings.
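
Read as logic, the limits above amount to a simple pre-flight check. The helper below is hypothetical client-side code, not part of the Bluesky API.

```python
ALLOWED_EXTENSIONS = {".mp4", ".mpeg", ".webm", ".mov"}
MAX_DURATION_SECONDS = 60
MAX_VIDEOS_PER_POST = 1

def check_video(filename: str, duration_seconds: float,
                videos_already_in_post: int) -> list:
    """Return the list of rule violations for a prospective upload."""
    problems = []
    if not any(filename.lower().endswith(ext) for ext in ALLOWED_EXTENSIONS):
        problems.append("unsupported file type")
    if duration_seconds > MAX_DURATION_SECONDS:
        problems.append("video longer than 60 seconds")
    if videos_already_in_post + 1 > MAX_VIDEOS_PER_POST:
        problems.append("only one video per post")
    return problems

assert check_video("clip.mov", 45, 0) == []
assert check_video("clip.avi", 75, 1) == [
    "unsupported file type", "video longer than 60 seconds",
    "only one video per post"]
```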

Update to version 1.91 of the mobile app or refresh desktop to begin watching video on Bluesky. We're rolling out the ability to post video gradually to ensure a smooth experience.

Some more details

- You can attach subtitles to your video.
- Currently, you can upload 25 videos / 10 GB of video per day. We may tweak this limit.

At Bluesky, the product team works hand-in-hand with Trust & Safety to develop new features. Here’s the safety tooling available with video:

- You must verify your email before you can upload a video. This is one step to decrease spam and abuse with video.
- You can apply labels to your own videos, for example, for adult content.
- You can submit reports to Bluesky's moderation team for posts with video. These posts may be labeled or taken down.
- Video that contains illegal content will be purged from our infrastructure.
- For users that repeatedly violate our community guidelines with video content, Bluesky's moderation team may remove your ability to upload videos.
- Every video is processed via Hive and Thorn to scan for content that requires a content warning or content that should be taken down (e.g. illegal material like CSAM).
- When you delete a post that contains video, the video will be deleted immediately. Shortly afterwards, the data will be entirely purged from Bluesky infrastructure as well.

Sports, pop culture, politics, breaking news, and so much more just got a lot more exciting on Bluesky! We’re so excited for our community to continue to grow. See you on Bluesky!

Tuesday, 10. September 2024

KuppingerCole

A Glimpse into the 2024 IGA Market Landscape

The IGA market continues to grow, and although it is at a mature technical stage, it continues to evolve in the areas of intelligence and automation. Today, some organizations are still looking at replacements for UAP and ILM or IAG, but most are opting for a comprehensive IGA solution that simplifies deployment and operations and tackles risks originating from inefficient access governance. The level of identity and access intelligence has become a key differentiator between IGA product solutions. Automation is still the key trend in IGA to reduce management workload by automating tasks, providing recommendations, and improving operational efficiency.

Nitish Deshpande, Research Analyst at KuppingerCole, will discuss the current state of the IGA market, the core capabilities required by IGA solutions as well as the business activities supported by IGA solutions. He will describe our Leadership Compass methodology and process and show some high-level results from the report which was published last month.


Unlocking Success: Practice-Oriented Role Management and Authorization Concept Administration in Focus

IT professionals face the challenge of efficiently managing complex role structures and authorization concepts. The multitude of individual entitlements and role objects complicates not only their creation but also their continuous adaptation to changing requirements in identity and access management (IAM). In addition, compliance requirements must be met and changes documented in a traceable way. With the help of modern technologies such as centralized platforms, visual analytics, and workflow engines, the challenges of role management and authorization concept administration can be tackled effectively.

Join the IAM experts from KuppingerCole Analysts and Nexis as they discuss how the complexity of role structures, compliance requirements, and the need for traceability of changes represent significant challenges in IAM.

Matthias Reinwarth, Director Practice IAM at KuppingerCole Analysts, will provide an overview of the growing need for an overarching, well-administered role concept. He will also explain why this is particularly necessary for meeting legal and regulatory requirements.

Alexander Puchta, Head of Professional Services at Nexis GmbH, will explain how standardized approaches and integrations enable customers to implement best practices and meet compliance requirements. Practical examples will illustrate the applicability of these solutions.


Analyst's View: Passwordless Authentication for Enterprises

by Alejandro Leal

Driven by the security risks and inconvenience associated with passwords, organizations are increasingly moving towards eliminating them altogether. Passwordless authentication solutions have emerged as a compelling alternative, offering enhanced security features and improved user convenience compared to traditional methods. Although passwordless options have been around for a while, some recent solutions are gaining traction with enterprises and even consumer-facing businesses.

1Kosmos BlockID

Navigating the Complexities of Modern Customer Identity Verification

In an era where identity theft and fraud are rampant, understanding the complexities of customer identity verification is crucial for businesses, especially in the financial sector. This involves meticulous Know Your Customer processes, safeguarding sensitive customer data, and adhering to global regulations to prevent fraudulent activities. Technological advancements such as AI, blockchain, and biometrics have revolutionized these processes, ensuring they are more secure and user-friendly.

Understanding KYC (Know Your Customer)

Know Your Customer, commonly called KYC, is a pivotal component of customer identity verification. KYC is a process whereby businesses verify the identity of their clients, confirm that customers' identity documents are genuine, and assess the potential risks of maintaining a business relationship with them. Businesses, particularly in the financial sector, employ KYC procedures to comply with global regulations and prevent fraudulent activities such as money laundering, identity fraud, and identity theft.

The KYC process includes various stages, such as customer identification, customer due diligence, and ongoing monitoring of a customer's account and transactions. It involves collecting, verifying, and maintaining detailed customer information, including personal details, contact information, and document verification. As a result, KYC helps create a secure business environment, fostering trust between clients and businesses.
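
A minimal sketch of those stages as a pipeline, assuming hypothetical verify_document and assess_risk services; none of these names come from 1Kosmos.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class KycFile:
    name: str
    document_verified: bool = False
    risk_rating: Optional[str] = None       # "low" | "medium" | "high"
    monitoring_alerts: list = field(default_factory=list)

def run_kyc(customer: KycFile,
            verify_document: Callable[[str], bool],
            assess_risk: Callable[[KycFile], str]) -> KycFile:
    """Run the KYC stages in order; the two callables stand in for real
    IDV and due-diligence services (hypothetical names)."""
    # 1. Customer identification: confirm identity documents are genuine.
    customer.document_verified = verify_document(customer.name)
    # 2. Customer due diligence: rate the risk of the relationship.
    customer.risk_rating = assess_risk(customer) if customer.document_verified else "high"
    # 3. Ongoing monitoring would periodically re-run assess_risk on new activity.
    return customer

demo = run_kyc(KycFile("A. Customer"), lambda name: True, lambda c: "low")
assert demo.document_verified and demo.risk_rating == "low"
```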

Data Privacy and Protection

In customer identity verification, data privacy and the protection of sensitive information are paramount. Safeguarding customer data against unauthorized access and potential breaches is indispensable for maintaining customer trust and regulatory compliance. Businesses must establish robust data protection mechanisms that ensure customer data is stored, processed, and transmitted securely.

Data protection goes beyond the confines of technological safeguards. It encompasses legal and procedural measures, including consent management, data minimization, and adherence to global data protection regulations. In essence, protecting customer data is not merely a technical requirement but a comprehensive approach that integrates technology, legal compliance, and ethical considerations in handling a customer's identity information.

Verification Process and User Experience

The verification process is a critical juncture where customer experience and security converge. An effective verification process must be streamlined, user-friendly, and secure, balancing stringent security measures with a seamless user experience. Businesses must design intuitive online verification processes, minimizing customer effort and reducing the abandonment rate.

An optimized customer verification process incorporates multiple verification methods, such as document verification, biometric authentication, and two-factor authentication, to ensure compliance and enhance security. Furthermore, it is imperative that the verification process remains agile, adapting to evolving customer needs and emerging security threats. Thus, fostering a verification process that combines user-centricity and security is instrumental in enhancing customer satisfaction and trust.

How Do You Verify Customer Identity?

Utilizing AI and ML in Verification

Artificial Intelligence (AI) and Machine Learning (ML) are transformative technologies reshaping the landscape of customer identity verification. AI and ML algorithms can analyze vast datasets, identify patterns, and facilitate real-time decision-making in the identity verification process. These technologies enable automated document verification, facial and voice recognition, and anomaly detection, enhancing the accuracy and efficiency of identity verification.

By harnessing the power of AI and ML, businesses and financial institutions can automate repetitive tasks, reduce human error, and expedite verification, including checks of credit information. These technologies also allow for the continuous improvement of verification procedures, as the algorithms learn and adapt to new patterns and threats, ensuring the verification process remains robust against evolving fraudulent tactics.

Blockchain for Secure Data Storage

Blockchain technology is emerging as a formidable force in securing customer data and enhancing the integrity of identity verification processes. Blockchain allows for the creation of decentralized and immutable ledgers where customer data can be stored securely, mitigating the risks associated with centralized data storage, such as data breaches and unauthorized access.

In a blockchain-based identity verification system, a customer's identity data is encrypted and stored in a decentralized manner, ensuring it is resilient against tampering and unauthorized access. This technology fosters enhanced data integrity and trust, as customers can exercise greater control over their data, and businesses can ensure that the data used to verify customers is accurate and unaltered.
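
One common pattern behind this, sketched minimally below: keep the personal data off-chain and anchor only a cryptographic digest on the ledger, so anyone can later re-check that the off-chain record was not altered. The record fields are invented for the example.

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """Hash a canonical serialization of the record; the digest (not the
    data) is what would be anchored on-chain."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

record = {"name": "A. Customer", "document": "passport", "verified": True}
anchored_digest = fingerprint(record)          # stored on the ledger (illustrative)
assert fingerprint(record) == anchored_digest  # later integrity check passes
```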

Biometrics and Advanced Verification Methods

Biometrics have cemented their place as a cornerstone of advanced identity verification methods. Biometric verification encompasses various modalities of ID verification, such as fingerprint recognition, facial recognition, and voice authentication. These methods leverage individuals' unique biological and physical characteristics, providing a high level of security and accuracy in identity verification services.

Employing biometrics in the verification process enhances the user experience by enabling quick and effortless verification. Moreover, it bolsters security by ensuring that the verified identity corresponds to a live individual, mitigating the risks associated with identity theft and spoofing. As biometric technology continues to evolve, it is poised to play an increasingly pivotal role in shaping secure and user-friendly identity verification processes.

Legal and Compliance Aspects

Global Regulatory Framework

Navigating the global regulatory landscape is indispensable in customer identity verification. International regulations and guidelines govern the processes and protocols for verifying customer identities. These regulatory frameworks aim to safeguard customer data, prevent fraudulent activities, and promote a secure digital ecosystem. Adhering to these regulations is paramount for businesses to maintain operational legitimacy and foster customer trust.

These global regulations often mandate stringent KYC (Know Your Customer) verification procedures, Anti-Money Laundering (AML) policies, and robust data protection measures. They require continuous compliance, necessitating that businesses stay abreast of regulatory updates and dynamically align their verification processes to meet evolving compliance standards.

GDPR, CCPA, and Other Data Protection Laws

Prominent data protection regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are pivotal in shaping customer identity verification processes. These regulations advocate for stringent data protection measures, consent management, and enhanced user control over personal data. Compliance with these laws is imperative to safeguard user data and uphold organizational credibility and brand reputation.
These regulations contain specific provisions on collecting, storing, and processing personal data during customer verification. They advocate data minimization, purpose limitation, and enhanced security measures to prevent unauthorized access to and breaches of personal information. Understanding and incorporating these legal provisions is therefore crucial for businesses to run lawful and secure identity verification processes.

Challenges and Solutions in Customer Identity Verification
Balancing Security and User-Friendliness

Creating a verification process that is both secure and user-friendly is a challenge. A robust verification process must ensure that security is maintained, but it should also avoid creating cumbersome processes that may deter users. Simplifying and streamlining the verification process while maintaining high-security standards is crucial for enhancing user satisfaction and trust.
Employing intuitive user interfaces, minimizing the number of required user actions, and leveraging technologies like biometrics can help achieve this balance. Adaptive authentication, which adjusts the level of required verification based on the associated risk, is another approach that can optimize the user experience without compromising security.
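
To make the idea concrete, here is a minimal sketch of adaptive authentication in TypeScript. The risk signals, weights, and thresholds are hypothetical; production systems derive risk from far richer telemetry.

type Factor = 'password' | 'otp' | 'biometric';

// Hypothetical risk signals; real systems weigh many more.
interface LoginContext {
  newDevice: boolean;
  unusualLocation: boolean;
  failedAttempts: number;
}

function riskScore(ctx: LoginContext): number {
  let score = 0;
  if (ctx.newDevice) score += 0.3;
  if (ctx.unusualLocation) score += 0.4;
  score += Math.min(ctx.failedAttempts * 0.1, 0.3);
  return score;
}

// Step up the required factors as the assessed risk grows.
function requiredFactors(ctx: LoginContext): Factor[] {
  const score = riskScore(ctx);
  if (score < 0.3) return ['password'];
  if (score < 0.7) return ['password', 'otp'];
  return ['password', 'otp', 'biometric'];
}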

Dealing with Fraud and Identity Theft

Fraud and identity theft remain pervasive threats in today’s digital age. Crafting verification processes that can robustly counteract these threats is crucial. Techniques such as multi-factor authentication, machine learning to detect unnatural patterns, and continuously updated security protocols can enhance resilience against these challenges.
Cultivating user awareness about potential threats and safe practices is vital. Education and clear communication can empower users to act as a robust first line of defense, recognizing and averting potential security threats before they manifest into breaches.

Future-Proofing Verification Processes

Ensuring that verification processes remain relevant and effective in evolving technological landscapes and emerging threats is essential. Future-proofing involves cultivating a flexible and adaptive verification strategy that swiftly incorporates new technologies, addresses emerging threats, and meets changing regulatory requirements.
Continuous learning, proactive adaptation of new technologies, and fostering a security-centric organizational culture are critical facets of future-proofing verification processes. It involves technological adaptability and strategic foresight to anticipate future trends and challenges, ensuring sustained relevance and effectiveness.

Automate Your Customer Verification Process with 1Kosmos

1Kosmos addresses the pivotal aspects of customer identity verification, modernizing and securing the customer onboarding process. It revolutionizes KYC (Know Your Customer) by offering self-service identity verification, ensuring customers are authenticated with over 99% accuracy.
1Kosmos ensures a robust and unbiased verification process by utilizing live facial biometrics matched with government-issued credentials. Moreover, it empowers customers with a digital wallet, allowing them to securely transact and share Personally Identifiable Information (PII), enhancing user experience and trust.
Our platform’s emphasis on privacy by design aligns with the global emphasis on data protection. It puts users in complete control of their PII, ensuring enhanced security and compliance with regulations such as GDPR and CCPA.
1Kosmos’ innovative approach, combining biometrics and blockchain technology, enhances the security and efficiency of the customer identity verification process and fosters a user-centric approach, balancing stringent security measures with a seamless user experience.
Beyond refining customer identity verification, 1Kosmos also incorporates added security features like:
1. Biometric-based Authentication: We push biometrics and authentication into a new “who you are” paradigm. 1Kosmos uses biometrics to identify individuals, not devices, through credential triangulation and identity verification.
2. Identity Proofing: 1Kosmos provides tamper-evident and trustworthy digital verification of identity – anywhere, anytime, and on any device – with over 99% accuracy.
3. Privacy by Design: Embedding privacy into the design of our ecosystem is a core principle of 1Kosmos. We protect personally identifiable information in a distributed identity architecture, and the encrypted data is only accessible by the user.
4. Distributed Ledger: 1Kosmos protects personally identifiable information in a private and permissioned blockchain and encrypts digital identities so they are accessible only by the user. The distributed properties ensure there are no databases to breach or honeypots for hackers to target.
5. Interoperability: 1Kosmos can readily integrate with existing infrastructure through its 50+ out-of-the-box integrations or via API/SDK.
6. Industry Certifications: Certified to, and exceeding, the requirements of the NIST 800-63-3, FIDO2, UK DIATF, and iBeta PAD-2 specifications.

To learn more about the 1Kosmos solution, visit the platform capabilities and feature comparison pages of our website.

The post Navigating the Complexities of Modern Customer Identity Verification appeared first on 1Kosmos.


KuppingerCole

KuppingerCole Cybersecurity Council Reflects on the CrowdStrike Incident: Lessons and Future Directions

by Berthold Kerl

On September 4, 2024, KuppingerCole’s Cybersecurity Council convened for its third meeting of the year. This council, composed of Chief Information Security Officers (CISOs) from some of Europe’s largest organizations, provides a platform for discussing pressing cybersecurity challenges. This session focused on the July 2024 CrowdStrike incident, which caused widespread disruption to Windows systems globally, and provided members the opportunity to share their lessons learned and proposed future actions.

The incident, caused by a faulty kernel-level driver, resulted in the crash of around 8 million machines worldwide, particularly affecting systems using BitLocker encryption. John Tolbert, KuppingerCole’s lead analyst, opened the discussion with an analysis of the event, pointing out that insufficient pre-deployment testing and the absence of a phased rollout were key factors in the incident’s scale. Tolbert also presented findings from his recent research into Endpoint Protection, Detection, and Response (EPDR) tools, highlighting the growing complexity and risk that accompanies widespread reliance on these solutions.

The attending CISOs, representing a variety of industries from banking to energy and retail, provided invaluable feedback on how their organizations dealt with the fallout from the CrowdStrike incident. Their experiences offered a wide range of perspectives: from those who directly used CrowdStrike to those impacted by the vulnerabilities of suppliers who relied on it. A key theme that emerged was the importance of improving testing procedures, ensuring stronger controls over software updates, and reinforcing supply chain security practices.

Across the board, CISOs emphasized the importance of Business Continuity Management (BCM). One organization reported that despite having thousands of systems down, their BCM efforts ensured a rapid recovery, with 95% of systems restored within 48 hours. Others, however, encountered significant operational downtime, particularly in sectors reliant on point-of-sale systems. For these organizations, recovery was hampered by complex dependencies on both internal and third-party systems.

Another key insight revolved around insurance and liability issues. CISOs debated the challenges of pursuing insurance claims in incidents where the root cause stems from software vendors rather than cyberattacks. Many organizations are now considering adding technical insurance to their cyber policies, as existing coverages did not account for software-induced outages.

One of the more nuanced discussions concerned the merits of multi-vendor EPDR strategies. While employing multiple security tools may reduce dependence on a single vendor, the increased complexity of managing and integrating different solutions often brings its own risks. Several members expressed concern over this approach, with one noting that a multi-EPDR strategy could cause operational inefficiencies that outweigh the potential benefits.

The session concluded with a focus on key takeaways:

Better Testing and Controlled Rollouts: Vendors must implement more stringent testing protocols and provide customers with better control over update timings to avoid global disruptions.

Supply Chain Security: Organizations need to reassess their vendor management strategies, ensuring that service-level agreements (SLAs) clearly define responsibilities during incidents.

Incident Communication: Timely and transparent communication with internal teams and external partners is critical in managing the fallout from large-scale incidents like CrowdStrike’s.

The KuppingerCole Cybersecurity Council continues to serve as an essential forum for CISOs to exchange insights and best practices. The next in-person meeting will take place during the cyberevolution 2024 conference, scheduled for December 3-5 in Frankfurt, where members will further explore cutting-edge cybersecurity strategies and enjoy networking opportunities.

This lively session offered valuable insights for council members and showcased the ongoing relevance of collaborative efforts in the cybersecurity space. Through these discussions, the council can drive industry-wide improvements in how security incidents are managed, both for member organizations and the broader public.

Next Meeting: December 3-5, 2024, cyberevolution, Frankfurt.


Indicio

From federated to decentralized identity: Why Verifiable Credentials are the next step in identity management

By: Helen Garneau

In today’s digital world, identity is at the core of how individuals interact with online services. From accessing email to making online purchases, proving who you are is fundamental.

There are two methods for managing online identities, federated identity and decentralized identity — one legacy, one new — and each takes a different approach to where personal data is stored in order to authenticate an identity. Federated identity, which has dominated identity management for years, relies on centralized data management: personal data is stored in a database and checked against a login and password from a user account. Decentralized identity, by contrast, allows people, organizations, and things to hold their own personal data, with the source and integrity of that data cryptographically authenticated for identity verification.

We’ll explain this in more detail in a moment, but this distinction — centralized vs decentralized — has profound implications for data privacy, security, and user experience.

Federated Identity: A Step Beyond Centralized Identity

Federated identity systems improve upon traditional centralized digital identity by allowing a single sign-on (SSO) across multiple platforms. Instead of creating separate accounts for each service, users can log in once using a trusted identity provider (IdP) like Google, Facebook, or Microsoft, and access various services. This system offers convenience for both users and service providers, reducing the friction of managing multiple identities.

Federated identity providers get their information directly from users during account creation or from external sources like social media, public records, and other databases. In many cases, businesses rely on these providers to authenticate users, paying for verification services or receiving data in exchange for marketing insights. While this model offers convenience, it has significant drawbacks.

The Drawbacks of Federated Identity

Centralized Control: Even though federated identity reduces the need for multiple login credentials, it still relies on centralized identity providers. These providers act as gatekeepers to online services, standing between the end user and the service they are accessing. This creates a system where a few large enterprises control a vast number of digital interactions.

Lack of Privacy: Federated identity providers typically gather extensive amounts of user data, which is then monetized. Users may not be aware of how much data is being shared across services or sold to third parties, leading to privacy concerns. As more services link to federated identities, the amount of shared data can grow exponentially.

Single Points of Failure: The reliance on one or two major identity providers also introduces risk. If a federated identity provider goes offline, or if an account is locked or hacked, users lose access to all associated services. This concentration of control makes federated systems prone to major disruptions when something goes wrong.

Data Breaches: Federated systems, though more distributed than centralized identity models, still concentrate sensitive data in the hands of a few large corporations. As history has shown, these providers are frequent targets for hackers, making them vulnerable to large-scale breaches that compromise millions of users at once.

Decentralized Identity: A User-Centric Solution with Verifiable Credentials

Decentralized identity flips the traditional centralized model on its head. Instead of relying on centralized authorities to manage identity data collected from third parties, decentralized identity systems give individuals control over their own data.

How does this work? It’s a two-step process. First, a global standard from the World Wide Web Consortium (W3C) allows people and organizations to create decentralized identifiers (DIDs), which they can cryptographically prove they control. Then, using these DIDs, they can add digital credentials that contain relevant identity information, like a government ID, bank account, or passport. This makes it easy to present their information digitally for verification by other entities, independently, without intervention from federated systems.

Verifiable Credentials are a special type of digital credential that offer a powerful and efficient way to issue, share, and verify important data. What sets them apart is that the data is digitally signed by the trusted issuer, ensuring its origin and authenticity can be instantly verified using simple software—without needing logins, passwords, or checking against a database. Since you hold your own data, you can choose when to share it, solving a key issue in data privacy regulation: lack of consent. Plus, some Verifiable Credentials let you selectively share only the necessary information or use privacy-preserving features. And if anyone tries to alter the credential after it’s issued, the change is easy to spot during verification.

The combination of DIDs and Verifiable Credentials means that you can always be certain of the source of a credential and that the data in the credential hasn’t been altered.
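
For a rough sense of what such a credential looks like in practice, here is a minimal sketch following the W3C Verifiable Credentials data model, written as a TypeScript object. The DIDs, credential type, claim values, and signature value are all hypothetical.

// A minimal Verifiable Credential sketch; identifiers and claims are
// hypothetical. Real credentials are signed with the issuer's key, and the
// proof lets any verifier check origin and integrity without a database.
const verifiableCredential = {
  '@context': ['https://www.w3.org/2018/credentials/v1'],
  type: ['VerifiableCredential', 'GovernmentIDCredential'],
  issuer: 'did:example:government-agency',
  issuanceDate: '2024-10-15T00:00:00Z',
  credentialSubject: {
    id: 'did:example:holder-123',   // the holder's DID
    dateOfBirth: '1990-01-01',
  },
  proof: {
    type: 'Ed25519Signature2020',
    verificationMethod: 'did:example:government-agency#key-1',
    proofValue: 'z3FXQ...',          // issuer's digital signature (truncated)
  },
};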

The Advantages of Decentralized Identity with Verifiable Credentials

User Control and Privacy: In a decentralized identity system, individuals have full control over their credentials. They decide which pieces of information to share and with whom. This is in contrast to federated identity, where large identity providers mediate these transactions. Decentralized identity systems enable self-sovereign identity (SSI), meaning users have complete autonomy over their personal data.

Improved Privacy through Selective Disclosure: Verifiable Credentials allow for selective disclosure, where users can prove certain facts (like being over 18) without revealing unnecessary information (like a full birthdate). This significantly enhances privacy and minimizes the sharing of personal data compared to federated identity systems, where often more information than necessary is shared across services.

No Single Point of Failure: Unlike federated identity, decentralized identity doesn’t rely on any single provider. This dramatically reduces the risk of losing access to services in the event of an account compromise or a provider outage. The use of distributed ledger technology means there is no central database that can be breached, making decentralized identity systems inherently more secure.

Persistent Identity: When a credential issuer writes the metadata for a credential to a distributed ledger, the actual identity it supports cannot be taken away. The immutability of data written to a distributed ledger means that a Verifiable Credential can always be verified. Important to note — only metadata for the credential, the data to perform cryptography, is written to the ledger. No personal data goes on the ledger.

Added Security: When you don’t have to store personal data on a database to manage identity authentication and access, it can’t be stolen. It’s as simple as that. Another huge benefit — you can access accounts or systems without having to use passwords. And if you want the ultimate in security, you can issue biometrics as Verifiable Credentials. This means that when a person performs a biometric scan, they simultaneously present a biometric template in a Verifiable Credential, and the scan is compared with the template. This effectively binds biometric data to a person and can be used to prevent generative AI deepfakery.

Efficiency and Convenience: While federated identity simplifies login processes by allowing users to access multiple services with one account, decentralized identity goes even further. Once verifiable credentials are issued, they can be reused across different services without having to rely on a third-party identity provider for each transaction. This speeds up verification processes and reduces reliance on external parties.

Why Decentralized Identity and VCs Are the Future

Decentralized identity, powered by verifiable credentials, represents a paradigm shift in how we manage identity online. By addressing the security, privacy, and efficiency challenges inherent in centralized and federated systems, it offers a robustness that traditional identity systems cannot match. Eliminating centralized identity providers and reducing the risk of data breaches makes managing digital identities more secure and private, and reusing credentials across services without intermediaries delivers a more seamless, user-friendly experience.

In an increasingly interconnected world, decentralized identity and VCs pave the way for a more secure, private, and user-centric digital future.

Visit Indicio for more information on decentralized identity and verifiable credentials. Or contact us to find out how your organization can boost your digital identity programme.

###

Suggested reading:

Beginners guide

What are Verifiable Credentials? (With Pictures!)

What is DIDComm? (With Pictures!)

How verifiable credentials disrupt online fraud, phishing, and identity theft

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post From federated to decentralized identity: Why Verifiable Credentials are the next step in identity management appeared first on Indicio.


Lockstep

It’s safe to assume AIs can at least read. Isn’t it?

What do you think Large Language Models do?

It’s easy to think LLMs think. Anthropomorphism is literally a force of nature. Human beings have evolved with a “Theory of Mind” to help us act more effectively with other conscious beings (I think there might be a better term somewhere for “Theory of Mind”; after all, it’s more a cognitive faculty than a “theory”).

It’s a powerful instinct. And, like other instincts that evolved for a simpler life on the savannah, Theory of Mind can tend to overdo things. It can lead us to intuit, falsely, that all sorts of things are alive (anyone remember the Pet Rock craze?). It seems Theory of Mind leads to “psychological illusions” just as our pre-wired visual cortex leads to optical illusions when we hit it with unnatural inputs. And so some people go so far as to feel that LLMs are sentient.

But most of us are probably wise to the impression that AIs give of being life-like.

So, what do LLMs really do?

Surely it’s safe to presume that a Large Language Model can at least read? I mean, their very name suggests that LLMs have some kind of grasp of language. Any fool can see they ingest text, interpret it and describe what it means. So that means they’re reading, right?

Well, no, AIs don’t even do that.

Check out this short explainer on Instagram by the wonderful @albertatech, covering a howler made by all LLMs when asked “How many Rs are in the word strawberry?”.
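
The usual explanation for this howler is that LLMs operate on subword tokens rather than individual characters. As a loose illustration (the token split below is hypothetical; real tokenizers learn their own vocabularies), compare what a model is handed with what letter counting actually requires:

const word = 'strawberry';

// What a subword tokenizer might hand the model (illustrative split).
const tokens = ['straw', 'berry'];

// What counting letters actually requires: character-level access.
const rCount = [...word].filter((c) => c.toLowerCase() === 'r').length;

console.log(tokens); // [ 'straw', 'berry' ]
console.log(rCount); // 3 (the model never "sees" the letters one by one)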

People’s mental models of AI are hugely important. The truth is that AIs lack anything even close to self-awareness. They cannot reflect on the things they generate and why. They have no inner voice that applies common sense to filter right from wrong, much less a conscience to sort good from bad. This makes AIs truly alien creatures, despite their best impressions.

Their failure modes are not even random (with apologies to Wolfgang Pauli). Society has no institutional mechanisms to deal with AIs’ deeply weird failures and yet we’re letting them drive on our public roads.

We casually talk about AIs “reading” and “writing”. We see them “seeing”; we interpret their outputs as “interpretations”.

These are all metaphors, and they’re wildly misleading.

The post It’s safe to assume AIs can at least read. Isn’t it? appeared first on Lockstep.


KuppingerCole

Cloud Security - Problem Solved? No!

by Osman Celik

Cloud computing is an essential tool for organizations of all sizes, from small businesses to large enterprises. However, as cloud adoption continues to accelerate, securing cloud environments has remained a major challenge. Today, organizations still face significant difficulties in protecting their data and resources in the cloud. One of the main reasons is the complexity of cloud environments and the shared responsibility model, which distributes security duties between the cloud provider and the user. Many organizations still struggle to understand where their cloud security responsibilities begin and end. This lack of clarity continues to leave cloud environments exposed to a wide range of vulnerabilities.

Organizations that operate in highly regulated industries, such as healthcare, finance, and government, are particularly vulnerable to cloud security challenges. These sectors deal with large amounts of sensitive data, such as personal information, financial records, and healthcare data. This makes them the prime targets for cybercriminals. Additionally, these industries face strict regulatory requirements that further complicate their cloud adoption. While larger organizations may have the resources to invest in advanced tools and hire experts, some small and medium-sized enterprises (SMEs) face challenges in implementing necessary security measures due to limited resources.

Cloud Security Challenges in 2024

In 2024, challenges like data breaches, misconfigurations, insider threats, regulatory compliance issues, third-party risks, and insufficient identity and access management (IAM) continue to be the top cloud security concerns for organizations. Data breaches remain one of the most significant risks because of the high volume of sensitive data stored in the cloud. Attackers can easily exploit weak security measures and vulnerabilities to gain unauthorized access to confidential data. Misconfigurations, such as exposing databases to the public without proper encryption, are also common and frequently result in massive data leaks.

The complexity of cloud environments contributes to the human factor, which in turn leads to insider threats, as employees may overlook some of the critical security measures. Whether intentional or accidental, insiders can cause severe damage by accessing sensitive data, misusing credentials, or exposing systems to cybercriminals. Regulatory challenges add another layer of complexity, as organizations must comply with regional and/or global compliance requirements, such as the General Data Protection Regulation (GDPR), Payment Card Industry Data Security Standard (PCI-DSS), or the Health Insurance Portability and Accountability Act (HIPAA). Ensuring regulatory compliance in cloud environments can be resource intensive and expensive. As many organizations depend on external vendors and cloud service providers to handle critical parts of their infrastructure, they are also often exposed to third-party risk. When one of these third parties is compromised, it can lead to security incidents across the entire ecosystem.

Lack of adequate IAM practices increases the risk of security breaches in cloud environments, given IAM’s role in managing user access to resources. Weak IAM policies lead to unauthorized access and allow attackers to compromise accounts and passwords. Lack of multi-factor authentication (MFA) also poses a risk of intrusions into cloud systems. These IAM-related vulnerabilities highlight the need for organizations to enforce strict access controls and regularly audit user permissions to ensure they are in line with the principle of least privilege.

The Financial Impact of Security Incidents is Alarming

According to IBM's 2024 "Cost of a Data Breach" report, the global average cost of a data breach in the cloud was $4.88 million per incident, with the healthcare industry experiencing the highest average costs at $9.77 million per breach. Additionally, misconfigurations were estimated to have cost organizations over $3.18 trillion in 2023, due to the combined expenses of lost revenue, remediation efforts, and regulatory fines. These figures highlight the financial impact that cloud security failures can impose.

Hybrid Cloud is still an Option

Cloud security concerns are still a significant factor preventing some organizations from fully embracing cloud technology. While many businesses recognize the benefits of moving to the cloud, security concerns often lead to delayed adoption of cloud systems. In some cases, organizations delay cloud migration or implement hybrid solutions. Such organizations often store critical data on-premises while only shifting non-sensitive data to the cloud. This approach allows them to maintain greater control over their most valuable assets but limits the full potential of cloud-based innovation.

Enhance Your Cloud Protection through Advanced Security Strategies

With employees and devices accessing cloud resources from anywhere, Zero Trust assumes that threats could arise both inside and outside the network. The Zero Trust model enforces a "never trust, always verify" approach, ensuring that all users, devices, and applications are continuously authenticated and authorized before accessing resources.

AI and ML automate threat detection, analysis, and response actions. These technologies can also process enormous volumes of data in real time, enabling security systems to detect anomalies and malicious activities much faster than human analysts. By learning from patterns in cloud traffic and user behavior, AI and ML can anticipate potential cloud security threats and act proactively. However, these technologies are not risk-free: attackers can also use them to launch more advanced attacks that learn how to bypass security systems.

Automated compliance management tools facilitate the monitoring of cloud environments, generate compliance reports, and alert users to any potential violations. These solutions reduce the manual effort required for audits and ensure that organizations stay up to date with changing regulatory standards.

Cloud Security Posture Management (CSPM) solutions address misconfigurations and maintain strong security hygiene across cloud environments. CSPM tools monitor cloud configurations to identify risks such as exposed storage buckets, insecure firewall settings, or overly permissive access controls. Misconfigurations are one of the most common causes of cloud security breaches, and CSPM helps organizations detect and remediate these issues before they can be exploited. As more organizations adopt multi-cloud or hybrid cloud strategies, CSPM provides the visibility and control needed to secure these complex environments.
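
As a concrete illustration, a CSPM check reduces to evaluating a configuration inventory against policy rules. The sketch below shows the idea in TypeScript with a hypothetical storage-bucket inventory and two illustrative rules; real products ship large rule libraries mapped to compliance frameworks.

// Hypothetical inventory record; real CSPM tools pull this from cloud APIs.
interface StorageBucket {
  name: string;
  publicAccess: boolean;
  encryptedAtRest: boolean;
}

// Evaluate each bucket against two illustrative misconfiguration rules.
function findMisconfigurations(buckets: StorageBucket[]): string[] {
  const findings: string[] = [];
  for (const bucket of buckets) {
    if (bucket.publicAccess) {
      findings.push(`${bucket.name}: publicly accessible`);
    }
    if (!bucket.encryptedAtRest) {
      findings.push(`${bucket.name}: encryption at rest disabled`);
    }
  }
  return findings;
}

// Example: one exposed bucket, one unencrypted bucket.
console.log(findMisconfigurations([
  { name: 'backups', publicAccess: true, encryptedAtRest: true },
  { name: 'logs', publicAccess: false, encryptedAtRest: false },
]));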

We are Back in Town - cyberevolution 2024

We are excited to invite you to our cyberevolution event in Frankfurt am Main on December 3-5, 2024. We will be exploring a wide range of cybersecurity topics, with plenty of chances to chat with industry experts. Cloud Security will be one of the big topics on the agenda.

Here are some sessions that might catch your interest:

Cloud Application Security from CNAPP to AINAPP
The Cloud Conundrum: Balancing Agility with Security
Security at Scale - Mastering Cloud Security in the Cyberwar Era

You can also check out our published Leadership Compasses below:

Leadership Compass – Zero Trust Network Access (ZTNA)
Leadership Compass – Cloud Security Posture Management (CSPM)
Leadership Compass – Cloud Native Application Protection Platforms (CNAPP)

Lockstep

Money, the Metaverse and David Birch (Making Data Better EP15)

George and I had a virtual blast recently on our podcast with David Birch. As an adviser and global raconteur in payments, identity and digital transformation, Dave needs little introduction. With Meeco COO Victoria Richardson, he has just co-authored a fascinating book, Money in the Metaverse: Digital assets, online identities, spatial computing and why virtual worlds mean real business.

Dave took us into their thinking about secure, private transactions in the metaverse(s).

Virtual money makes the virtual world go around

Dave was drawn to write a new book after finding it strangely clunky to pay for things in at least one virtual world.

He told us about being at an industry event with lots of people “walking around as avatars and meeting each other”. That all seemed real enough until he wanted to buy something. He had to come out of the metaverse and undergo an all-too-real payment rigmarole—scanning a QR code, opening another website, typing in card details—before he could rejoin the virtual fun.

Surely, he thought, “I should be doing things inside the metaverse instead of taking off my VR glasses!”. He enlisted Victoria as co-author; he describes her as a “brilliant digital strategist” with a proper framework for thinking about these things.

The state of the art in self-contained metaverse commerce is all about DeFi, Web 3, tokenisation and cryptocurrency.  Loudly sceptical about these things IRL, Dave says “there’s absolutely no doubt” they will form “the next generation financial market infrastructure”.

Dave has an optimistic and generous view of the metaverse. “It’s early days” (of course) yet he is confident that the metaverse’s many pioneers will continue to refine and innovate and surprise us, taking AR/VR technology in new directions.

He likens Apple’s Vision Pro headset to the Apple Newton of the 1990s. The Newton wasn’t attractive to typical consumers either, but over time, everyone saw that it was the prototype iPad. So who’s to say where the Vision Pro will lead?

And I should add that Dave does not think $3,000 for a Vision Pro is unreasonable.

In this blog, I’m going to go deep once more on authenticity in the metaverse (I’ve previously looked at how the metaverse should force a rigorous re-examination of digital identity).

But first, here’s a sample of the areas George and I covered with Dave (don’t forget to take a listen):

In less than 45 minutes, we traversed gaming, brand marketing, car insurance, banking, newspapers and print media, comedy, concert tickets, adult services, COVID, teenage mental health, and virtual girlfriends and boyfriends.

Digital, says Dave, is “the natural UX for young people today. It’s how they meet their friends, how they socialize, how they connect. So, in a very short time, brands are going to need to be in those spaces as well.”

On ownership and tokenisation: “[The] amount of effort that’s already going into the proto-metaverses is substantial, but it’s hamstrung by the fact that the things that they build aren’t theirs. They belong to the platform.”

On economics, in-built platform security is such an imperative that Dave and Victoria see virtual worlds as potentially safer and more efficient than the real world. As a result, transaction costs will fall, and businesses in all sectors will feel pressure to move into the metaverse.

Real authenticity

When we turned to authenticity, Dave set the scene as follows:

“Of course, in the metaverse nothing’s real, putting to one side what real means … we certainly don’t want the metaverse to end up in the mess that we’re in at the moment with the internet where we see fake [TV personalities] shilling cryptocurrency”.

Cryptographic security must be “part of the warp and weft” of a new infrastructure, in a way that we simply overlooked in the rush to Web 1 and Web 2.  Dave points out that a whole “panoply of keys, key generation, certificates, digital signatures and encryption” was missing from the internet.  He is a forceful champion of security being inherent to the infrastructure; on this point he calls himself a “maximalist”.

What would such security look like? Well, we might not even notice it. Crucially, Dave does not imagine us having to prove our bona fides by showing pictures of virtual driver licences. I agree; it would be moronic to simulate a superficial verification process when it is so bad in real life.

Instead, Dave foresees metaverse platforms just knowing your authorisation attributes and applying them to covertly regulate your virtual experience. So, if for example you’re not 18 years old and you approach an age-restricted venue or event, then you won’t even have the option of going in.

“In any metaverse I’d want to take part in, if a photo doesn’t have a digital signature that says ‘this comes from the New York Times’ or ‘from George Peabody’, I don’t want to even see it.”

So, one crucial distinction he sees between the metaverse and any virtual world built so far on the internet, is that authenticity will be part of the infrastructure.

In a sense, everything in a Dave Birch metaverse will be real!

Questions

A simulated world in which everything we see is true could save digital civilisation. But we need to approach any Utopia with caution.

What’s real in an unreal world? What is truth? If the answer is everything’s relative, then authenticity will need to be configurable.

Beauty is in the eye of the beholder, and authenticity in the metaverse needs to be in the hands of the beholder as well.

The point of the metaverse is to shift reality. If users have any freedom to adjust what’s real, then they will need to set their own authenticity standards. I might for example be able to have the BBC determine what political stories are true as far as I am concerned and have New Yorker film critics control my cinema experience.

Inevitably, beneath any metaverse, are the unseen platforms. As we discussed with Dave, platforms have had most of the control so far. Dave calls for a shift in control and asset ownership from landlords to denizens.

There are many privacy issues. If a metaverse platform knows my personal attributes and applies them to shape my virtual experience (such as removing pubs and clubs from my experience if I am under-age) then the platform must be watching what I am trying to do around the clock.

I guess that’s a price users could pay for the seamlessness of having the world “know” them without having to see a virtual ID card. That trade-off might be perfectly fine—if we trust the platforms, and/or they are closely regulated.

If metaverses even come close to mimicking the richness of the real world, the platforms will have unprecedented executive control over our activity. They will literally direct what we experience and even how we behave, because the platforms’ software will mediate our very existence in the worlds.

Is the metaverse going to need benign meta-dictators?

More on Money in the Metaverse

Reviewed by Irish Tech News, May 2, 2024.

Dave was interviewed on the Pay it Forward podcast, June 28, 2024.

Victoria and Dave were interviewed on The Banker, July 10, 2024.

The post Money, the Metaverse and David Birch (Making Data Better EP15) appeared first on Lockstep.


Tokeny Solutions

21X and Tokeny Collaborate to Expand Global Liquidity and Tradability of Tokenized Real-World Assets

LUXEMBOURG, 10 September 2024 – 21X and Tokeny have announced today that they have signed a strategic partnership as they look to revolutionize capital markets. Having developed the very first DLT trading and settlement system (DLT TSS) under the European Union’s DLT regime, 21X is teaming up with Tokeny – the leading onchain finance operating system – to allow issuers using Tokeny’s white-label platform or APIs to admit financial instruments to trading on 21X.

21X’s smart contract-based trading venue allows participants for the first time to undertake fully regulated trading of financial instruments according to the EU DLT Regime. 21X is working with a number of tokenization companies to permit matching, trading and settlement of tokenized assets – and now includes Tokeny, the leading tokenization platform.

As part of this collaboration, Tokeny connects DINO, the distribution network for tokenized real-world assets (RWA) and securities, with 21X’s market infrastructure to foster liquidity and tradability of ERC-3643-based assets. Acting as the interoperable distribution network for tokenized assets, DINO plays a pivotal role in the digital asset ecosystem with an extensive reach of over 50 liquidity platforms, providing the flexibility for ERC-3643-based tokenized securities to be listed and traded seamlessly across any of these blockchain-based channels.

Tokeny and 21X are enhancing compatibility between ERC-3643 tokens and the DLT trading and settlement system of 21X. By providing access to 21X, Tokeny’s customers will be able to list their assets on an ESMA-regulated secondary market, ensuring end-to-end compliance for issuers and investors. Meanwhile, clients of 21X gain access to Tokeny’s white label tokenization solutions, providing them with the ability to issue, manage, and distribute tokenized securities with a no-code platform while expanding their investor base through liquidity pools and participants within the DINO distribution network.

21X is partnering with Tokeny, one of the world’s leading asset tokenization providers, with over 120 customers and almost $28 billion in assets tokenized, to date. Everybody is working hard to have all the elements of our digital asset ecosystem in place to go-live by the end of 2024. This includes building strong partnerships with the likes of Tokeny and we are looking forward to their customers’ digital assets being available to trade on 21X as soon as our exchange begins operating.

– Max J. Heinzle, Founder & CEO of 21X

We're excited to team up with 21X to not only allow our customers to list tokenized securities on 21X but also to collaborate in building liquidity rails by expanding the DINO distribution networks to support ERC-3643 token issuers. Together, we are laying the foundation for the future of tokenization.

– Luc Falempin, CEO of Tokeny

About 21X

21X is a Frankfurt-based fintech, developing a blockchain-powered exchange for tokenized assets, which will operate under the regulatory supervision of the European Securities and Markets Authority (ESMA).

With the institutional adoption of tokenized securities, 21X is ideally positioned to enable smart contract-based issuance, trading and settlement of tokenized stocks, bonds and funds. 21X has submitted its license application to operate a DLT trading and settlement system (DLT TSS) and is expected to be one of the first companies authorized to operate under the EU DLT regime.

See the short explainer video on 21X and our blockchain-based exchange here.

About Tokeny

Tokeny provides the leading onchain finance operating system, leveraging market standards like ERC-3643, to bring control, compliance, and efficiency in the era of open finance. It enables seamless issuance, transfer, and management of tokenized securities. The enterprise-grade platform and APIs unify fragmented onchain and offchain workflows, integrating essential services to eliminate silos. By automating operations, offering innovative onchain services, and connecting with any desired distributors, Tokeny helps financial actors attract more clients and improve liquidity. Trusted globally, Tokeny has successfully executed over 120 use cases across five continents and facilitated 3 billion onchain transactions and operations.

Website | LinkedIn | X/Twitter

The post 21X and Tokeny Collaborate to Expand Global Liquidity and Tradability of Tokenized Real-World Assets appeared first on Tokeny.


IDnow

IDnow’s YRIS solution obtains Substantial Level of Assurance for digital identities according to eIDAS

With the latest certification of French Cybersecurity Agency (ANSSI), YRIS is now eligible to be featured on FranceConnect+

Munich/Rennes, September 10, 2024 – IDnow, a leading identity verification platform provider in Europe, has received a security Visa from the French Cybersecurity Agency, the Agence nationale de la sécurité des systèmes d’information (ANSSI), recognizing the Substantial Level of Assurance (LoA) certification for digital identities for its YRIS digital identity wallet. The LoA is defined by the European eIDAS regulation (electronic Identification, Authentication and Trust Services).

Seamless reuse of verified digital identity credentials

YRIS was first launched in June 2022 and allows the seamless reuse of verified digital identity credentials. It enables users to easily and securely prove their identity without having to scan a physical ID document and their face every time they need to access a service. A further strength of YRIS is that it allows all French citizens to create this digital identity based on the old French national ID card, the new national ID card, or the residence permit.

Today, more than 450,000 users in France are using YRIS in their day-to-day lives via FranceConnect, the national digital identity federator, where users authenticate or identify themselves for eGovernment and other regulated services in France. The new certification also qualifies YRIS to be featured on FranceConnect+, making another digital identity provider available on the platform.

FranceConnect+ is similar to FranceConnect but its Substantial LoA provides an eIDAS node that will permit mutual recognition of French citizens on services in other European Union member states with their French digital identity. It can be used to carry out administrative procedures with more stringent user identification requirements, such as using training credits, obtaining subsidies, etc. It can also be used to generate qualified electronic signatures, to send or receive electronic registered mail, and to meet identification requirements for financial transactions subject to AML-CTF regulations.

Authentication and verification in financial services, insurance, HR sectors and electronic registered mail

Besides possible integration on FranceConnect+, YRIS can also be used for proof of identity and as a secured method of strong authentication in the financial or insurance industries, and in human resources. Several use cases, such as financial account opening, insurance contracts, loans or rental agreements, can now be processed via YRIS thanks to the new Substantial LoA. Based on the eIDAS regulation, YRIS can also be used by providers of electronic registered mail services as a compliant method for identifying the recipient, a promising market for mail replacement.

“This certification is the latest company milestone for IDnow, which remains committed to playing a key role in Europe’s ambition to create and offer a single, reliable and secure digital identity to its citizens and residents,” says Marc Norlain, Managing Director and Head of the Reusable Identities Unit at IDnow.

“With their reusable digital identities, end users in France will be able to open a bank account or carry out any banking operation, perform a qualified electronic signature, open an online gaming account, or send or receive an electronic registered letter. We are at a pivotal moment in the digital identity ecosystem in France and Europe overall and IDnow is proud to lead the way with our expertise and our proven solutions.”


Veridium

Veridium Joins IGEL at Disrupt 2024: Elevating Security for the Edge


We’re excited to announce that Veridium will be joining forces with our strategic partner IGEL at IGEL Disrupt 2024! This flagship event is the premier gathering for cloud workspaces and digital transformation enthusiasts, and we can’t wait to showcase how Veridium’s cutting-edge identity authentication solutions complement IGEL’s advanced edge computing environments.

As a pioneer in revolutionizing user identity security, Veridium empowers organizations to enhance their security posture through our Identity Assurance Platform. By reliably verifying user identities and devices, we ensure that your digital workspaces are protected by AI-based identity threat protection and continuous authentication. Our platform addresses a fundamental security challenge: accurate and secure user authentication from start to finish—across virtual desktops, cloud workspaces, and beyond.

Veridium’s platform integrates seamlessly with existing Identity/SSO providers, while extending security to ZTNA, MDM, and EDR solutions. We offer the widest range of authenticators on the market, including passwordless and phishing-resistant options, FIDO tokens, and patent-protected biometric solutions (such as contactless fingerprints, facial recognition, and behavioral biometrics). Whether your organization is beginning its identity and access management (IAM) journey or refining mature processes, Veridium ensures consistent, secure authentication that keeps pace with evolving threats.

At Disrupt 2024, join us to discover how Veridium and IGEL are transforming secure access for the modern digital workspace. Experience our live demos and hear from our experts on how we’re enabling secure, seamless, and scalable solutions across VDI and DaaS environments.

Special Offer: Use coupon code DISRUPT24EXCLUSIVE to get your ticket for just 120 Euros!

Read our Data Sheet to learn more about our IGEL integration! Stay tuned for updates, and we look forward to seeing you at IGEL Disrupt 2024!

PingTalk

Ping Identity: Leading the Future of Passwordless Authentication

Eliminate passwords and user friction with Ping Identity. Learn why we're leaders in passwordless authentication in the latest Leadership Compass report.

Passwords are a security nightmare and the biggest cause of user friction. However, getting rid of them across your environment may require a platform approach. The latest Leadership Compass report on Passwordless Authentication for Enterprises highlights Ping Identity as a leader in this space. Here's an in-depth look at why Ping Identity stands at the forefront of passwordless authentication for enterprises.


What is Banking as a Service (BaaS)?

Understand Banking as a Service (BaaS), its relation to embedded finance, and crucial identity security practices for providers.

Banking as a service (BaaS) is a model that allows non-bank businesses to offer financial services by integrating banking capabilities directly into their own products. This article will explain BaaS, how it works, and why identity and access management (IAM) solutions are necessary for earning trust. You'll also learn how IAM, including both customer identity and access management (CIAM) and workforce identity, enables BaaS to function securely and efficiently.


Okta

Secure OAuth 2.0 Access Tokens with Proofs of Possession

In OAuth, a valid access token grants the caller access to resources and the ability to perform actions on them. This makes the access token powerful, and dangerous if it falls into malicious hands. Under the traditional bearer token scheme, the token grants access to anyone who possesses it. A new OAuth 2.0 extension specification, Demonstrating Proof of Possession (DPoP), defines a standard way to bind the access token to the OAuth client sending the request, elevating access token security.

At a high level, DPoP uses a public/private key pair to create a signed DPoP proof that the authorization server and resource server use to confirm the authenticity of the request and the requesting client. This way, the token is sender-constrained, and a token thief is far less likely to be able to use a compromised access token. Learn more about the problems DPoP solves and how it works by reading:

Elevate Access Token Security by Demonstrating Proof-of-Possession

Protect your OAuth 2.0 access token with sender constraints. Learn about possession proof tokens using DPoP.

Alisa Duncan

The primary use case for DPoP is public clients, but the spec elevates token security for all OAuth client types. Public clients are applications where authentication code runs within the end user’s browser, such as Single-Page Applications (SPA) and mobile apps. Due to their architecture, public clients inherently carry higher risk and weaker security guarantees in authentication and authorization. Public clients can’t leverage a client secret, which is used by application types that can communicate with the authorization server through a “back-channel,” a network connection opaque to users, network-sniffing attackers, and nosy developers. Without proper protection, a SPA may store tokens where they are accessible to the end user and to injection-related attacks. DPoP adds an extra protection layer that makes tokens less usable if stolen.
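
To make the proof mechanics concrete, here is a minimal sketch of constructing a DPoP proof JWT along the lines of RFC 9449, assuming the jose npm library; the target URI is a placeholder.

import { SignJWT, generateKeyPair, exportJWK } from 'jose';

// Generate the client's key pair; the public key travels in the proof header.
const { publicKey, privateKey } = await generateKeyPair('ES256');
const jwk = await exportJWK(publicKey);

// A DPoP proof is a short-lived JWT bound to one HTTP method and URI.
const dpopProof = await new SignJWT({
  htm: 'GET',                                   // HTTP method of the request
  htu: 'https://{yourOktaDomain}/oauth2/default/v1/userinfo', // placeholder URI
  jti: crypto.randomUUID(),                     // unique ID to resist replay
})
  .setProtectedHeader({ alg: 'ES256', typ: 'dpop+jwt', jwk })
  .setIssuedAt()
  .sign(privateKey);

// The proof accompanies the sender-constrained access token:
//   Authorization: DPoP <access_token>
//   DPoP: <dpopProof>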

Table of Contents

Get the starting Angular, React, or Vue project
Add OAuth 2.0 and OpenID Connect (OIDC) to your application
Configure OAuth scopes for Okta API calls
Inspect the OAuth 2.0 bearer tokens and request resources manually
Use secure coding techniques to protect your web apps
Migrate your SPA to use DPoP
Trace the token request requiring a DPoP nonce
Request resources using DPoP headers
Manually request DPoP-protected resources
Store cryptographic keys in browser applications
Use modern evergreen browsers for secure token handling
Learn more about web security, DPoP, and OAuth 2.0

In this post, you’ll experiment with DPoP and step through migrating a public client application using OAuth bearer tokens compared to DPoP tokens. We’ll build upon the existing OAuth 2.0 Authorization Code flow. Need a refresher? Check out this post:

How Authentication and Authorization Work for SPAs

Authentication and authorization in public clients like single-page applications can be complicated! In this post, we'll walk through the Authorization Code flow with Proof Key for Code Exchange extension to better understand how it works and what do with the auth tokens you get back from the process.

Alisa Duncan

Note

This code project is best for developers with web development experience, knowledge of debugging network requests and responses, and familiarity with OAuth and OpenID Connect (OIDC).

The post uses Angular, but you can follow the concepts and network calls using a sample project in your favorite SPA framework. Check out samples using React or Vue. You’ll need to make a couple of minimal changes to the code. I will call out the changes, but I will not post the specific code or instructions.

Are you following the step-by-step code instructions in Angular? This post assumes you already have Angular knowledge. If you are an Angular newbie, start by building your first Angular app using the tutorial created by the Angular team.

A hands-on project requires tools for local web development.

Prerequisites

You’ll need the following tools:

Node.js v18 or greater
A web browser with good debugging capabilities, such as Chrome
Your favorite IDE. Still looking? I like VS Code and WebStorm because they have integrated terminal windows.
Terminal window (if you aren’t using an IDE with a built-in terminal)
Git and an optional GitHub account if you want to track your changes using a source control manager
An HTTP client that shows the HTTP requests and responses, such as the Http Client VS Code extension or curl

Get the starting Angular, React, or Vue project

You’ll use a starter project. These instructions are for the Angular sample project. If you are following along in React or Vue, replace the GitHub repo location with the URL for the sample you’re using.

Open a terminal window and run the following commands to get a local copy of the project in an okta-client-dpop-project directory and install dependencies. Feel free to fork the repo so you can track your changes.

git clone https://github.com/oktadev/okta-angular-dpop-example.git okta-client-dpop-project
cd okta-client-dpop-project
npm ci

Open the project in your favorite IDE. The project includes Okta’s client authentication SDKs, a sign-in button, a profile route that displays user information by calling the OIDC /userinfo endpoint, and a route that makes a call to Okta’s Users API. Both HTTP requests require an access token, so we’ll follow the requests and responses for these two calls.

React and Vue project instructions

React and Vue projects need a couple of changes. Change the profile component to call oktaAuth.token.getUserInfo() and display the JSON output. Add a call to Okta’s User API /api/v1/users. You’ll replace the domain name later. You may want to create a new Users component (and route) to match the Angular sample.

Use the SDK reference docs for React and Vue.

You need to set up an authentication configuration to serve the project. Let’s do so now.

Add OAuth 2.0 and OpenID Connect (OIDC) to your application

You’ll use Okta to handle authentication and authorization in this project securely. Okta APIs have built-in DPoP support — how secure and handy! We’ll experiment with DPoP in the client application by calling Okta’s APIs.

React and Vue project instructions

Replace the two redirect URIs to match the port and callback route for the application. You’ll find the URI for both in your project’s README file. Follow the instructions in the README to add the issuer and client ID to the app. Use the format for the issuer. Notice this is different from the starter code.

Before you begin, you’ll need a free Okta developer account. Install the Okta CLI and run okta register to sign up for a new account. If you already have an account, run okta login. Then, run okta apps create. Select the default app name, or change it as you see fit. Choose Single-Page App and press Enter.

Use http://localhost:4200/login/callback for the Redirect URI and set the Logout Redirect URI to http://localhost:4200.

What does the Okta CLI do?

The Okta CLI will create an OIDC Single-Page App in your Okta Org. It will add the redirect URIs you specified and grant access to the Everyone group. It will also add a trusted origin for http://localhost:4200. You will see output like the following when it’s finished:

Okta application configuration:
Issuer:    https://dev-133337.okta.com/oauth2/default
Client ID: 0oab8eb55Kb9jdMIr5d6

NOTE: You can also use the Okta Admin Console to create your app. See Create an Angular App for more information.

Note the Issuer and the Client ID. You’ll need those values for your authentication configuration, which is coming soon.

There’s one manual change to make in the Okta Admin Console. Add the Refresh Token grant type to your Okta Application. Open a browser tab to sign in to your Okta developer account. Navigate to Applications > Applications and find the Okta Application you created. Select the name to edit the application. Find the General Settings section and press the Edit button to add a Grant type. Activate the Refresh Token checkbox and press Save.

Leave the Okta Admin console open. You’ll continue making changes in there.

I already added Okta Angular and Okta Auth JS libraries to connect our Angular application with Okta authentication.

In your IDE, open src/app/app.config.ts and find the OktaAuthModule.forRoot() configuration. Replace {yourOktaDomain} and {yourClientID} with the values from the Okta CLI.

Configure OAuth scopes for Okta API calls

We’re calling an Okta API, so we must add the required OAuth scopes.

In the Okta Admin Console, navigate to the Okta API Scopes tab in your Okta application. Find the okta.apps.read and okta.users.read.self scopes and press the ✔️ Grant button for each.

Open the src/app/users/users.component.ts and find the call to list users: /api/v1/users. We’re taking shortcuts here, such as calling the API directly in the component for this demonstration project. In production-quality Angular apps, ensure you architect your application following best practices so you can add automated tests and troubleshoot issues quickly.

Replace {yourOktaDomain} with your Okta domain.

React and Vue project instructions

Add the two scopes to the OIDC configuration for the application. Search for “scopes” and change the array to

scopes: ['openid', 'profile', 'email', 'offline_access', 'okta.users.read.self', 'okta.apps.read'],

Replace the {yourOktaDomain} in the Okta Users API call you added in the prior section.

Start the app by running:

npm start

Open a browser tab to view the app. Open the debugging view that shows the console and network requests. Since I am using Chrome, I’ll open DevTools. Enable Preserve log in the Console and Network tabs. For the Console tab, you’ll find the preserve log option after opening the gear menu.

Let’s ensure you can sign in, call the /userinfo endpoint to see your user information, and call Okta Users API. You’ll use the Authorization Code flow and redirect to Okta for the authentication challenge. Once you emerge victorious by assuring the identity provider you are who you claim to be, the authorization server redirects you back to the application. The redirect URI includes the authorization code. Okta’s SDK (the OIDC client library) calls the /token endpoint to exchange the authorization code for tokens.

After you sign in, the Angular app will display routes for “Profile” and “Users.” Navigating these routes calls the /userinfo and Users API. If you can access the routes and don’t see any HTTP request errors, you’re good to go!

Inspect the OAuth 2.0 bearer tokens and request resources manually

After signing in, you have the OAuth 2.0 access token and the OIDC ID token. Okta stores the tokens in browser storage. In DevTools, open the Application tab to view browser storage data. Okta Auth JS defaults to local storage for tokens and is configurable based on your application needs. Expand Local storage, select the application, and expand the okta-token-storage key to see the tokens and token metadata. The tokenType property is Bearer.
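If you'd rather inspect the stored tokens programmatically, you can read the same storage key from the DevTools console. A minimal sketch, assuming the default okta-token-storage key described above:

// Paste into the DevTools console to inspect Okta Auth JS token storage
const tokenStorage = JSON.parse(localStorage.getItem('okta-token-storage') ?? '{}');
console.log(tokenStorage.accessToken?.tokenType);   // "Bearer" before the DPoP migration
console.log(tokenStorage.accessToken?.accessToken); // the raw access token JWT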

Let’s see the API calls in action in the application. Navigate to both routes. In the Network tab, you see the initial /token, /userinfo, and Users API requests.

Let’s inspect the Users API request.

The request includes the Authorization header containing the token scheme and access token. You see the format Bearer <access_token>.

The entity holding the token can legitimately request resources. Let's try using the token from another client, impersonating the actions an attacker could take if they managed to capture it.

Note

Access tokens expire quickly. If too much time passes in these next steps, you may get a 401 Unauthorized. If you do, repeat the steps with a more recent access token by navigating between the profile and user routes to trigger a call to the API. It prompts the OIDC client (the Okta Auth JS SDK) to update expired tokens.

Copy the token from the browser, and double-check you captured the entire token. Open your HTTP client and run the following HTTP request replacing {yourOktaDomain} and {yourAccessToken}:

GET https://{yourOktaDomain}/api/v1/users HTTP/1.1
Authorization: Bearer {yourAccessToken}

If you use curl, add the verbose flag to see the request and response headers:

curl -v --header "Authorization: Bearer {yourAccessToken}" https://{yourOktaDomain}/api/v1/users

The call succeeds even though the HTTP client isn’t the same client the authorization server issued the token to (the sample app).

Let’s call another endpoint with the same access token, the Okta Applications endpoint. Run the following HTTP request replacing {yourOktaDomain} and {yourAccessToken}:

GET https://{yourOktaDomain}/api/v1/apps HTTP/1.1
Authorization: Bearer {yourAccessToken}

The call succeeds even though you call from a different client, just as you saw in the prior step when calling the Users API. For a privileged user, the call succeeds as long as the Okta Application has the okta.apps.read grant and the OIDC config requests the scope. You may say that's a lot of constraints, and you're right. Okta adds a lot of guards around API requests for resources in the top-level Okta org, such as the list of Okta applications. This example demonstrates how powerful, and how vulnerable, tokens issued for privileged users like admins are. Anyone with the token can make the same request, even if they are an attacker.

Back in the app, sign out to clear the authenticated session and tokens. We’re making changes that require you to sign in from scratch.

Use secure coding techniques to protect your web apps

All web applications must use secure coding techniques to protect from attacks, breaches, and malicious use. Public clients store their tokens within the user’s hardware and require thoughtful security practices. Read more about SPA web security and security practices within Angular in this four-part series:

Defend Your SPA from Security Woes

Learn the basics of web security and how to apply web security foundation to protect your Single Page Applications.

Alisa Duncan

It doesn't matter if your application uses bearer tokens or DPoP; apps must employ secure coding practices. DPoP doesn't prevent attackers from stealing your token, but it constrains its use. DPoP uses asymmetric encryption to prove token ownership, so you must prevent exfiltration or unauthorized use of the keyset. An attacker who gets hold of the private key can create valid proofs.

Let’s migrate the application to DPoP and try making these HTTP requests again.

Migrate your SPA to use DPoP

Open the Okta Admin Console in the browser and navigate to Applications > Applications. Find the Okta application for this project. In the General tab, find the General Settings section and press Edit. Check the Proof of possession checkbox requiring the DPoP header in token requests. Press Save. Sign out of the Okta Admin Console.

If you try signing in again without making any code changes, you’ll see an error in the Network tab for the /token request:

HTTP/1.1 400 Bad Request

{
  "error": "invalid_dpop_proof",
  "error_description": "The DPoP proof JWT header is missing."
}

All HTTP requests to DPoP-protected resources (including the /token request) require proof. We must enable DPoP in the OIDC configuration.

The Okta Auth JS SDK has a configuration property for DPoP as part of the OIDC config. In your IDE, open src/app/app.config.ts and find the OktaAuthModule.forRoot() configuration. Add the dpop: true property. Your OIDC config will look something like this:

{
  issuer: ...,
  clientId: ...,
  redirectUri: ...,
  scopes: ['openid', 'profile', 'offline_access', 'okta.users.read.self', 'okta.apps.read'],
  dpop: true
}

Once the application rebuilds and reloads in the browser, make sure you have debugging tools open and then sign in.

Trace the token request requiring a DPoP nonce

When you sign in, you’ll see the initial call to the /token endpoint fails.

Take a look at the call's request headers. You'll see a header called DPoP, which contains the DPoP proof in JWT format, which means we can decode it and inspect its contents. You can use a trustworthy online tool such as JWT.io debugger or Base64 decode the header and payload sections of the JWT locally. In the JWT format, the content from the beginning up to the first . character is the header, and the content between the two . characters is the payload.

The header contains the token type, dpop+jwt, the encryption algorithm, and the cryptographic key information tied to this proof. The payload includes minimal HTTP information and other properties to prevent token attack vectors.

{ "alg": "RS256", "typ": "dpop+jwt", "jwk": { /* Key information in JSON Web Key format */ } } { "htm":"POST", "htu":"/oauth2/v1/token", "iat":1724685617, "jti": "e84a...283bbf", }

Why did the initial call to /token fail? It's because Okta requires an extra handshake that elevates security. The /token call requires a DPoP nonce that Okta provides. In response to the first /token call, Okta returns the standard DPoP nonce error and the DPoP-Nonce response header containing the nonce the client incorporates into the proof.

HTTP/1.1 400 Bad Request
DPoP-Nonce: "SVD....ubNc"

{
  "error": "use_dpop_nonce",
  "error_description": "Authorization server requires nonce in DPoP proof."
}

Okta’s Auth JS SDK has built-in support for DPoP-Nonce errors. Look at the DPoP proof token’s payload of the successful /token request. The payload includes the nonce returned in the first call.

{ "htm":"POST", "htu":"/oauth2/v1/token", "iat":1724685617, "jti": "e852...28396", "nonce":"SVD....ubNc" }

The token request succeeds, and we now have a DPoP access token.

Request resources using DPoP headers

In the app, navigating to view your profile succeeds because the SDK supports DPoP resource requests. You’ll see an error when navigating the “Users” route that calls Okta’s User API.

The HTTP response includes information about why the call errored.

HTTP/1.1 400 Bad Request
WWW-Authenticate: Bearer authorization_uri="http://{yourOktaDomain}/oauth2/v1/authorize", realm="http://{yourOktaDomain}", scope="okta.users.read.self", error="invalid_request", error_description="The resource request requires a DPoP proof.", resource="/api/v1/users"

The current code to make the Users API call adds the access token using the Bearer scheme in the Authorization header, but that’s incorrect for DPoP. We must incorporate the DPoP proof and change the scheme in the HTTP request.

Open the auth interceptor in the IDE. You can find the code in the src/app/auth.interceptor.ts file.

React and Vue project instructions

Find the code you added to request Users and incorporate the Angular instructions in the project to add the DPoP proof header and the DPoP scheme.

The interceptor has a check to ensure it adds the access token to allowed origins only. Change the interceptor code as follows:

export const authInterceptor: HttpInterceptorFn = (req, next, oktaAuth = inject(OKTA_AUTH)) => {
  let request = req;
  const allowedOrigins = ['/api'];
  if (!allowedOrigins.find(origin => req.url.includes(origin))) {
    return next(request);
  }
  // DPoP header handling for allowed origins goes here (added in the next step)
};

We need the proof and the authorization header. We’ll generate both using Okta Auth JS. The SDK method requires the HTTP method and URI we intend to call. The URI shouldn’t include query parameters or fragments. The SDK method returns an object with properties matching headers and their values, so we can use the spread operator to populate the DPoP-required headers.

Change the interceptor to match the code below.

import { DPoPHeaders } from '@okta/okta-auth-js';
import { defer, map, switchMap } from 'rxjs';

export const authInterceptor: HttpInterceptorFn = (req, next, oktaAuth = inject(OKTA_AUTH)) => {
  // allowed origin check
  const url = new URL(req.url);
  return defer(() =>
    oktaAuth.getDPoPAuthorizationHeaders({ url: `${url.origin}${url.pathname}`, method: req.method })
  ).pipe(
    map((dpop: DPoPHeaders) => req.clone({ setHeaders: { ...dpop } })),
    switchMap((request) => next(request))
  );
};

Now, if you sign in and call the Users API, you’ll get the list of users in your Okta org using DPoP.

Manually request DPoP-protected resources

Earlier, we pretended to steal the access token to make other resource requests. You called the Okta Apps API using the stolen JWT to see the list of all the apps your Okta org contains. What happens if we try this again when the API requires DPoP?

In DevTools, open the Network tab and find the /users call. You need both the proof and the access token for your HTTP call. Make an HTTP request:

curl -v --header "Authorization: DPoP {yourAccessToken}" --header "DPoP: {yourDPoPProof}" /api/v1/apps

The API rejected your request! You get back an error stating the DPoP proof isn’t valid:

HTTP/1.1 400 Bad Request
WWW-Authenticate: DPoP algs="RS256 RS384 RS512 ES256 ES384 ES512", authorization_uri="http://{yourOktaDomain}/oauth2/v1/authorize", realm="http://{yourOktaDomain}", scope="okta.apps.read", error="invalid_dpop_proof", error_description="'htu' claim in the DPoP proof JWT is invalid."

If an attacker manages to capture both the proof and the token, they may only be able to make the same request. The proof constrains the calls to the HTTP method and URI, invalidating other HTTP requests.

How about making the same request?

curl -v --header "Authorization: DPoP {yourAccessToken}" --header "DPoP: {yourDPoPProof}" /api/v1/users

The API rejected your request! You still get back an error stating the DPoP proof isn’t valid:

HTTP/1.1 400 Bad Request
WWW-Authenticate: DPoP algs="RS256 RS384 RS512 ES256 ES384 ES512", authorization_uri="http://{yourOktaDomain}/oauth2/v1/authorize", realm="http://{yourOktaDomain}", scope="okta.users.read.self", error="invalid_dpop_proof", error_description="The DPoP proof JWT has already been used.", resource="/api/v1/users"

The proof also has two other protection mechanisms: the JWT unique identifier (jti) and the issued-at time (iat). When a resource server enforces the jti claim, it tracks previous calls to prevent proof reuse, so an attacker can't replay the proof and the access token they stole. Enforcing the JWT ID isn't required by the DPoP spec. The other protection mechanism is the proof's issue timestamp, the iat claim. Resource servers check the issue time on proofs, and if it exceeds a threshold determined by the resource server, the server rejects the request.
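To make those mechanics concrete, here's a minimal sketch of how a resource server might enforce the jti and iat checks. The names and the 60-second threshold are illustrative assumptions, not Okta's implementation:

// Illustrative resource-server checks for DPoP proof freshness and reuse
const seenProofIds = new Set<string>(); // production systems use a shared cache with a TTL
const MAX_PROOF_AGE_SECONDS = 60;       // threshold chosen by the resource server

function assertProofFreshness(proofPayload: { jti: string; iat: number }): void {
  const now = Math.floor(Date.now() / 1000);
  if (Math.abs(now - proofPayload.iat) > MAX_PROOF_AGE_SECONDS) {
    throw new Error('invalid_dpop_proof: proof issued outside the accepted window');
  }
  if (seenProofIds.has(proofPayload.jti)) {
    throw new Error('invalid_dpop_proof: the DPoP proof JWT has already been used');
  }
  seenProofIds.add(proofPayload.jti);
}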

Store cryptographic keys in browser applications

We must securely store the keyset within the SPA and prevent an attacker from exfiltrating them. If an attacker has the keyset, they can impersonate you and make DPoP-protected calls. Fortunately, Okta SDK uses a few different techniques to mitigate keyset hijacking without any extra coding on your part.

Local and session storage aren't secure enough; this time, we'll rely on IndexedDB storage. The typical use case for IndexedDB is storing a large volume of data, but it has some built-in security mechanisms that work well for protecting the keyset. The SubtleCrypto API supports generating non-exportable keys, preventing browser code from turning the private key into a portable format. IndexedDB stores the keys as a CryptoKeyPair object, and DB query results return a reference to the object, not the raw key. IndexedDB protects sensitive private keys but still works with the WebCrypto methods for signing proofs.
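To illustrate what the SDK does for you under the hood, here's a sketch of generating a non-exportable key pair and persisting it in IndexedDB. The database and store names are made up for the example; the SDK manages its own storage:

// Sketch: non-exportable keys via SubtleCrypto, persisted as a CryptoKeyPair reference
async function createAndStoreKeyPair(): Promise<void> {
  const keyPair = await crypto.subtle.generateKey(
    { name: 'ECDSA', namedCurve: 'P-256' },
    false, // extractable: false, so code can sign with the key but never export it
    ['sign', 'verify']
  );

  const db: IDBDatabase = await new Promise((resolve, reject) => {
    const request = indexedDB.open('example-dpop-db', 1); // hypothetical DB name
    request.onupgradeneeded = () => request.result.createObjectStore('keys');
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
  db.transaction('keys', 'readwrite').objectStore('keys').put(keyPair, 'dpop');
}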

You can inspect the keys by following the steps:

Navigate to the Applications tab in DevTools
Expand IndexedDB under the Storage sidenav
Expand OktaAuthJs > DPoPKeys

The downside is that the IndexedDB API is more difficult to use than other browser storage APIs. Because IndexedDB data persists, we must manually clean up the keys when done. The SDK handles cleanup if the user explicitly signs out, but we can't guarantee a user always will, so we can also clear keys before signing in.

Open src/app/app.component.ts to find the signIn() method.

React and Vue project instructions

Find the code where the project calls the signInWithRedirect() method and follow the instructions described for Angular projects.

Add the call to clear keys as the first step in the signIn() method:

public async signIn(): Promise<void> {
  await this.oktaAuth.clearDPoPStorage(true);
  await this.oktaAuth.signInWithRedirect();
}

Use modern evergreen browsers for secure token handling

Creating and storing cryptographic keys in JavaScript apps requires a capable browser. Modern, evergreen browsers have the API support required for DPoP. Check browser capability if your app supports users who use less modern, more questionable browsers. The Auth JS SDK has a method to check browser capability, authClient.features.isDPoPSupported(). You can add this check during application bootstrapping or initialization.
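As a sketch, the capability check could gate the DPoP configuration during initialization; the fallback behavior here is an assumption for the example:

// authClient is your OktaAuth instance
if (!authClient.features.isDPoPSupported()) {
  console.warn('Browser lacks the crypto APIs DPoP requires; continuing with bearer tokens.');
  // e.g., initialize the OIDC config without `dpop: true`, or prompt a browser upgrade
}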

Remember, even if you aren’t using DPoP, modern browsers have more built-in security mechanisms. Stay secure, stay updated, and use safe browser practices whenever possible.

Learn more about web security, DPoP, and OAuth 2.0

In this post, you applied DPoP to a SPA and inspected DPoP in action. I hope you enjoyed it! If you want to learn more about the ways you can incorporate authentication and authorization security in your apps, you might want to check out these resources:

OAuth 2.0 and OpenID Connect overview
The Identity of OAuth Public Clients
Add Step-up Authentication Using Angular and NestJS
Configure OAuth 2.0 Demonstrating Proof-of-Possession

Remember to follow us on Twitter and subscribe to our YouTube channel for more exciting content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!

Monday, 09. September 2024

liminal (was OWI)

Link Index for Customer Identity and Access Management

The post Link Index for Customer Identity and Access Management appeared first on Liminal.co.

Finicity

FinovateFall 2024: Open banking and AI set the stage for financial innovation 


When banking, fintech and finance leaders gather in New York at one of the leading fintech conferences, FinovateFall, on September 9-11, two broad topics will dominate the agenda: how new regulations and the proliferation of behavioral data is enabling the age of open banking, and how artificial intelligence (AI) and machine learning can accelerate new product development, improve the customer experience and boost profits. 

Just as we expect streaming entertainment apps to offer us personalized choices, consumers and businesses today demand more digital, personalized services from their financial institutions. For decades, banks and financial institutions operated on closed ecosystems: in-person relationships were key, data was sequestered in core banking and card systems, and third-party data came from credit bureaus.  

That’s been changing recently as more businesses and consumers embrace open banking, both in response to fintech innovation and evolving data and privacy regulations. Today, application programming interfaces (APIs) enable third parties to offer services that complement bank services. In addition, new rules give consumers more control over their data and its use. These circumstances are combining to fuel a revolution in financial services.

A critical topic at FinovateFall will be how financial institutions can adapt to new Consumer Financial Protection Bureau (CFPB) rules, expected to be finalized in the coming months. The new regulations will formally establish the U.S. rules for open banking. Mastercard’s Head of Data Access and Business Development for Open Banking Ben Soccorsy will speak about how all this paves the way for a bold open banking future, discussing the opportunities posed by the new rules and how banks should address them to become a data recipient, enhance customer experience, drive innovation and, ultimately, boost profits. 

New research emphasizes the importance of open banking  

Both businesses and consumers have welcomed open banking. According to a forthcoming global Mastercard research report set to be published in September 2024, embracing open banking will be crucial to both business-to-business partnerships and maintaining consumer relationships. Among B2B survey respondents, 92% said using AI to safeguard consumer data and streamline processes is an important consideration when selecting open banking partners. Businesses also hope that open banking can improve their profitability (69%), boost their revenue (66%) and increase productivity/efficiency (65%). 

Mastercard’s Senior Vice President for Open Banking Network Services Ryan Beaudry also speaks at FinovateFall, discussing how AI and machine learning can improve such things as account-to-account payments. That’s crucial because 80% of U.S. consumers already link their financial accounts and 66% are likely to connect their bank accounts to an app or service in the future, according to the 2024 Mastercard survey.  

The same survey also found that how financial institutions handle data and open banking is important to consumers. Indeed, many of the features that attract U.S. consumers to engage with a financial services company—efficiency, convenience, security and privacy—are driving open banking innovations.  

Asked to name the top considerations when choosing which financial institutions to do business with, more than 90% of consumers said their top four priorities were: keeping their data secure, a convenient customer experience, greater control over how their data is used, and the ability to process transactions quickly.  

Once again, FinovateFall brings together thousands of senior decision-makers from financial institutions, fintechs and the investing community. With consumers and businesses becoming more digitally savvy and hungry for new innovations in how they interact with their finances, start-ups and public companies alike will show off their latest products and innovations.  

As keynote speaker and customer experience strategist Ken Hughes said ahead of the conference, “We are in a perfect storm of change, and we need to ensure that the financial services of today are fit for the customer of tomorrow.” 

If you’re at FinovateFall yourself, make sure to meet up with our open banking experts or reach out to them directly with any questions about your open banking opportunities. You can also visit our home for everything open banking and deep dive into some of our inspirational use cases.  


The post FinovateFall 2024: Open banking and AI set the stage for financial innovation  appeared first on Finicity.


Ontology

Mark Cuban’s Challenge to Trump Supporters Highlights a Bigger Problem in Venture Capital…

Mark Cuban’s Challenge to Trump Supporters Highlights a Bigger Problem in Venture Capital: Transparency

Mark Cuban recently put out a challenge: he wants Trump supporters to name any startups backed by the former president that don’t involve a member of his family. This seemingly simple call-out actually exposes a far deeper issue in venture capital — one that could be solved through the power of blockchain and decentralized identities. And it’s about time someone connected the dots.

Think about it — venture capital is notoriously opaque. Most of the time, we have no idea which startups are getting funded, why certain VCs back certain founders, and what skeletons are hiding in the closets of high-profile investors. Even if someone like Trump has a rocky investment history, there’s no easy way to track it. Cuban’s challenge brings that to the forefront. If no one can name a successful Trump-backed startup, doesn’t that say something about how easily reputations in venture capital can be manipulated or shielded from scrutiny?

Now, let’s take this to the next level. What if we could bring all this on-chain? What if every venture capitalist’s track record — every investment, successful or otherwise — was tied to their decentralized identity and available for anyone to audit? Imagine a world where the power of blockchain is leveraged to not just remove middlemen, but to remove the smoke and mirrors surrounding investor reputations. Every deal, every failure, every win would be part of a permanent, transparent ledger. No more guesswork. No more empty claims. No more hiding behind family names or closed-door deals.

This concept is rooted in the heart of what Web3 promises: transparency, trust, and the ability for people to control their own data. By connecting VC histories to decentralized identities, startups would have a new tool in their arsenal — a way to verify the legitimacy and reliability of their potential investors. The days of VCs backing founders for a quick PR boost, only to ghost them when things get tough, would be over. It would empower the startup ecosystem with verifiable truth, and most importantly, accountability.

Let’s be real — venture capital needs this kind of overhaul. The recent scandals involving bad actors like Adam Neumann or the fallout from WeWork’s botched IPO are just reminders of the shady side of this industry. And don’t get me started on the “fake it till you make it” culture rampant in Silicon Valley, where founders and investors alike build smoke screens rather than sustainable businesses.

In the future, blockchain and decentralized identities could make this all a thing of the past. And Ontology is leading the charge with its Decentralized Identity technology, which has the potential to create a new level of trust in these opaque markets. By offering zero-knowledge proofs and decentralized reputation systems, Ontology allows users to maintain privacy while still proving credibility. This is the solution that venture capital — and, frankly, business at large — has been waiting for.

Mark Cuban’s call for proof of Trump-backed startups may have been a jab, but it highlights something much more important. The VC world needs more transparency. Trump’s vague business reputation is just one example of how easily information can be spun, hidden, or hyped. With decentralized identity systems and reputation on-chain, we’d never have to ask these questions again. We’d know, without a doubt, who’s actually worth their salt.

As we continue to develop Web3 technologies, let’s push for a world where investor reputations and venture capital histories are public, verifiable, and untouchable by spin. It’s time for the truth to come on-chain.

Interested in learning more about decentralized identities and how they can revolutionize transparency in venture capital? Explore Ontology’s decentralized identity solutions and see how we’re building the future of trust.

Mark Cuban’s Challenge to Trump Supporters Highlights a Bigger Problem in Venture Capital… was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Nov 07, 2024: Overcoming the Challenges of MFA and a Passwordless Future

Securing user identities has become a crucial focus for organizations of all sizes. The evolution from traditional passwords to Multi-Factor Authentication (MFA) and eventually to passwordless solutions introduces various challenges, such as technical obstacles, changing threat landscapes, and resource limitations.

Oct 09, 2024: Adopting Passwordless Authentication

As businesses shift to more flexible work models, traditional password systems pose security risks and inefficiencies. The session will provide insights from recent KuppingerCole research, offering a comprehensive view of the evolving enterprise security landscape.

Sunday, 08. September 2024

KuppingerCole

Now or Never: Successful Transition From SAP Identity Management


SAP has announced the end of life for its identity management (IDM) system, which is a key component in many traditional SAP environments. This poses a challenge for organizations running on-premises SAP systems. To plan for a smooth transition, organizations should consider key strategies such as taking the time for thorough planning, thinking about the future of their IAM, and analyzing requirements before choosing a new solution.

The cost of implementation projects can be significant, but investing in proper preparation and tools upfront can save time and money in the long run. It is important to take a holistic view and consider the broader picture, including GRC and access governance solutions. Finding the right solution requires support from experts who understand the market and the organization's specific requirements.



Friday, 06. September 2024

Extrimian

DIDcon: Advances in Self-Sovereign Identity in Latin America

Introduction: DIDcon Identity Day

The first edition of DIDcon gathered experts from various fields in Buenos Aires, Argentina, to explore how decentralized identity technology enhances security, privacy, and data interoperability in an increasingly digitalized world.

Table of Contents

What is Decentralized Identity?
Summary of Talks at DIDcon
Welcome and Introduction
Security and Decentralization
The Future of Identity
Trust Ecosystems: Use Cases
Conclusion

What is Decentralized Identity?

Self-Sovereign Identity (SSI) redefines the concept of digital identity by managing and storing information in a decentralized manner, using technologies like blockchain. This model allows individuals to control their personal information without relying on centralized intermediaries, significantly improving data security and privacy.

Summary of Talks at DIDcon Welcome and Introduction

Jesús Cepeda, CEO and co-founder of OS City, and Diego Fernández, Secretary of Innovation and Digital Transformation of GCBA, opened the event by emphasizing decentralized identity as an essential tool that returns control of information to users. They highlighted how this technology unlocks global economic potential and combats cybercrime and the frictions of less intuitive solutions. They also pointed to QuarkID as an innovative example of how Latin America is implementing decentralized identity to enhance citizen security and privacy.

Security and Decentralization

In this talk moderated by Alfonso Campenni, Pablo Sabbatella, security researcher at SEAL and founder of Defy Education, emphasized how scams and cybercrimes have become more sophisticated. To combat this, he discussed how decentralization is an interesting path that strengthens the protection and security of information.

During the recent digital security panel, Pablo Sabbatella, an expert in the field, shared valuable recommendations for protecting our identities and data online. He stressed the importance of adopting safe practices in the digital age, especially in the context of increasing cyberattacks and vulnerabilities in the applications we use daily.

Main Security Recommendations by Pablo Sabbatella:

Avoid Repeating Passwords: It’s crucial to have unique passwords for each service to prevent cross-access in case of data breaches.
Use Two-Factor Authentication (2FA): Adding a second level of security is crucial. It is recommended to use code-generating apps instead of SMS or emails, which are less secure.
Be Cautious with Personal Data: It is vital to limit the personal information shared online and in applications, especially the phone number, which is a sensitive piece of data.
Avoid Downloading Pirated Software: Unofficial programs and applications can contain malware and seriously compromise personal and financial security.

These guidelines not only increase individual security but also foster a culture of awareness about online safety, which is essential for navigating safely in today’s digital world.

He also mentioned new standards being built for the implementation of Account Abstraction through smart contracts, which enhance key management and user experience.

The Future of Identity

In a panel moderated by Pablo Mosquella of Extrimian, experts such as Guillermo Villanueva, CEO and co-founder of Extrimian, Matthias Broner, Head of Growth LATAM at ZKsync, Mateo Sauton from Worldcoin, and Pedro Alessandri, Undersecretary of Smart City, debated how decentralized identity is transforming the digital landscape, creating a safer, more private, scalable, and interoperable environment. They also discussed the positive impact of QuarkID and its rapid expansion across Latin America, underscoring its potential to strengthen digital trust in the region.

Trust Ecosystems: Use Cases

In this session moderated by Lucas Jolías from OS City and Fabio Budris, Advisor to the Secretary of Innovation of the City of Buenos Aires, concrete use cases of decentralized identity were presented in managing procedures in Salta, at the National Technological University (UTN), and in pilot tests for organ transplant management at INCUCAI. These examples clearly illustrated the tangible impact of these technologies in key sectors such as government, education, and health.

Conclusion

DIDcon – Identity Day underscored the transformative power of Decentralized Identity to revolutionize society and maximize value in the physical, digital, and hybrid worlds. Initiatives like QuarkID are driving Latin America toward a more secure and reliable digital future, overcoming barriers that have historically limited its technological potential.

The adoption of these technologies not only promises to improve security and privacy but is also building a solid digital trust ecosystem that will bring significant benefits to all the involved countries.

Keywords: decentralization, SSI, DID, VC, QuarkID, Extrimian, blockchain, trust, security, privacy, interoperability, technology, digital identity.

The post DIDcon: Advances in Self-Sovereign Identity in Latin America first appeared on Extrimian.


Tokeny Solutions

Amsterdam Teambuilding Fuels Our Mission for Open Finance

The post Amsterdam Teambuilding Fuels Our Mission for Open Finance appeared first on Tokeny.
May 2024 Amsterdam Teambuilding Fuels Our Mission for Open Finance

Greetings from Amsterdam! We hope you had a wonderful summer holiday.

Recently, our global team gathered in this dynamic city, not just to build a stronger bond, but to align our vision and drive our mission forward. As we explored the charming streets and iconic canals, we strengthened our commitment to transforming finance.

Our Vision: We see a future where finance is modern, efficient, and accessible—where assets move as quickly as texts, transfers are instant, and cross-platform interactions are seamless. This is the promise of open finance, built on DLT infrastructures.

Our Mission: Our mission is to empower institutions with a no-code solution T-REX Platform and proven API solution T-REX Engine, along with our expertise and ecosystem, to upgrade to open finance seamlessly.

The Market’s Moment: We are at a crucial point in the journey of tokenization. According to Gartner’s latest report, tokenization is currently in the ‘trough of disillusionment’ stage, with mainstream adoption expected in the next 2-5 years. For institutions, this means there is a narrow window of opportunity. To be fully prepared for the market shift (when more assets will be tokenized than paper-based assets), in just two years, institutions must begin building their tokenization capabilities now.

This requires a proactive transformation of operational models, including the integration of a robust onchain operating system. Solutions like those offered by Tokeny can play a critical role in facilitating this transformation.

The risk of inaction is significant. Institutions that delay will struggle to keep up, risking their market position and potentially losing clients to more forward-thinking competitors. The time to act is now, or risk being left behind as the market rapidly evolves.

Our Growth: Our team is expanding fast, with talented new members from Luxembourg, Madrid, Bangkok, Paris, Sarajevo, Zaragoza, and Barcelona. Each of them brings fresh perspectives and skills to deliver products that address our partners’ needs to maintain a competitive advantage with our solutions.

The Road Ahead: With over 120 successful use cases globally, 42 talented builders, and more than 3 billion blockchain events recorded on our platform, we are ready to make history together with you. Our time in Amsterdam has made us stronger, more aligned, and more motivated than ever to make open finance a reality. The future is limitless.

We’ve also updated our vision and mission on our landing page to reflect our journey over the past 7 years. You can check it out here.

Tokeny Spotlight

PARTNERSHIP

ShipFinex and Tokeny revolutionize maritime asset tokenization.

Read More

CONTRIBUTION

CCO, Daniel Coheur, contributed to Zodia Custody’s report.

Read More

TEAM DAY

Our global team came together in the “gezellige” city of Amsterdam.

Read More

MILESTONE

We celebrate our LinkedIn page has reached 10,000 followers!

Read More

PRODUCT NEWSLETTER

We dive into demand for onchain services and why API’s are the key to lead.

Read More

INATBA

Discover our contribution to the tokenization section of the recent INATBA report.

Read More

Tokeny Events

Token2049
September 18th-19th, 2024 | 🇸🇬 Singapore

Register Now

Mainnet
September 30th-October 2nd, 2024 | 🇺🇸 USA

Register Now

European Blockchain Convention
September 25th-26th, 2024 | 🇪🇸 Spain

Register Now

DAW London
October 2nd-3rd, 2024 | 🇬🇧 United Kingdom

Register Now

ERC3643 Association Recap

 Bounty Challenge

Zama is organizing another bounty challenge: create a unique and confidential variant of the ERC-3643 security token standard using Zama’s fhEVM.

Read more

Subscribe Newsletter

A monthly newsletter designed to give you an overview of the key developments across the asset tokenization industry.

Previous Newsletters

Sep 6: Amsterdam Teambuilding Fuels Our Mission for Open Finance
Aug 1: Transaction Privacy: The Last Blocker for Massive Open Finance Adoption
Jun 28: Tokenized Securities Unaffected by MiCA, Utility Tokens and Stablecoins Face Stricter Rules
May 22: Institutional RWA Tokenization Needs Permissioned Cash Coins

The post Amsterdam Teambuilding Fuels Our Mission for Open Finance appeared first on Tokeny.


PingTalk

Policy Based Access Control (PBAC) Explained

Discover how Policy Based Access Control (PBAC) works, its benefits, and implementation steps tailored for financial services.

Traditional access control methods, such as role-based access control (RBAC) and attribute-based access control (ABAC), have built the foundation for securing systems and managing user access. 

 

However, they fail to provide the flexibility and enhanced security needed in today’s dynamic environment, especially for the financial services industry. As organizations navigate stringent compliance requirements and evolving security threats, they need a better alternative that makes dynamic, context-aware access decisions, like policy-based access control (PBAC).

 

Below, we’ll explore PBAC in further detail, how it compares to other models, and how it benefits the financial services industry.

Thursday, 05. September 2024

IdRamp

Account Takeover Attack (ATO) Defense: A Guide to Protecting Your Company


Account takeover (ATO) attacks have become a sophisticated and pervasive threat, with criminal organizations targeting businesses of all sizes and types. By gaining unauthorized access to company accounts, attackers can disrupt operations, steal sensitive data, and damage a company’s reputation.

The post Account Takeover Attack (ATO) Defense: A Guide to Protecting Your Company first appeared on Identity Verification Orchestration.

KuppingerCole

Authenticating Identities in the Age of AI: Strategies for Trustworthy Verification


In today's digital world, identity authenticity faces constant scrutiny, especially with the emergence of generative AI. However, modern tech provides innovative solutions. Chipped identity documents offer a trusted verification basis, embedding secure chips with verified data. Advancements like biometric authentication and blockchain-based verification ensure enhanced security and integrity. With these innovations, organizations can navigate identity verification confidently.

Join identity experts from KuppingerCole Analysts and InverID as they explore the pivotal role of chipped identity documents in reliable verification and their integration into eIDAS 2.0-compliant identity wallets. Discover strategies for establishing trust amidst faux realities, ensuring the integrity of digital identities.

Annie Bailey, Research Strategy Director at KuppingerCole Analysts, will discuss the implications of eIDAS 2.0 legislation and its impact on identity management. She will explain the concept of reusable verified identities and their significance in a multi-wallet ecosystem, as well as offer insights into preparing for a future with diverse credentials and the challenges it presents.

Wil Janssen, Co-founder and CRO of InverID, will explain the critical need for remote identity verification in today's digital landscape. He will illustrate how to leverage government-issued identity documents for secure verification, as well as highlight the importance of identity verification services in EU Wallets and beyond.




auth0

External User Verification with Forms

Learn how to leverage Auth0 Forms to implement an invitation code workflow and improve the onboarding of your SaaS users.

Evernym

Ensuring Compliance with Regulatory Requirements in Digital Security


Ensuring Compliance with Regulatory Requirements in Digital Security In an increasingly regulated world, ensuring compliance with digital security requirements is crucial for organizations of all sizes. Regulations and standards are designed to protect sensitive data, ensure privacy, and enhance the overall security of digital systems. However, navigating these requirements can be ...

The post Ensuring Compliance with Regulatory Requirements in Digital Security appeared first on Evernym.


Elliptic

Crypto regulatory affairs: Hong Kong kicks off tokenization sandbox with major institutional players

Hong Kong has taken yet another important step to bolster its position as a leader in the Asia-Pacific region for well-regulated cryptoasset and blockchain innovation. 



Okta

Elevate Access Token Security by Demonstrating Proof-of-Possession


We use access tokens to request data and perform actions within our software systems. The client application sends a bearer token to the resource server. The resource server checks the validity of the access token before acting upon the HTTP request. What happens if the requesting party is malicious, steals your token, and makes a fraudulent API call? Would the resource server honor the HTTP request? If you use a bearer token, the answer is “yes.”

My teammate wrote that an access token is like a hotel room keycard. Anyone holding a valid keycard can use it to access the room. Anyone holding a valid access token can use it to access a resource server.

7 Ways an OAuth Access Token is like a Hotel Key Card

Learn 7 things OAuth 2.0 access tokens have in common with a hotel key card.

Aaron Parecki

Bearer tokens (and static API keys) mean whoever presents the valid token to the resource server has access, which makes the token powerful and vulnerable. We can look at high-profile token thefts to see how prevalent and disastrous token theft is, so we want to ensure our applications aren’t vulnerable to similar attacks.

To protect tokens, we incorporate secure coding techniques into our apps, configure a quick expiration time on the token, and ensure only requests sent to allowed origins include the access token. Still, token attacks pose a risk to highly sensitive resources. What more can we do to secure requests?

This post describes a new OAuth 2.0 spec supported by Okta that makes access tokens less prone to misuse and helps mitigate security risks. If you want to refresh your OAuth knowledge, check out What the heck is OAuth.

Table of Contents

Bind OAuth 2.0 access tokens to client applications
Demonstrate proof of possession (DPoP) using JWTs
Incorporating DPoP into OAuth 2.0 token requests
Use DPoP-bound access tokens in HTTP requests
Extend the DPoP flow with an enhanced security handshake
Validate DPoP requests in the resource server
Learn more about OAuth 2.0, Demonstrating Proof-of-Possession, and secure token practices

Bind OAuth 2.0 access tokens to client applications

If we go back to the hotel keycard analogy, we want a hotel keycard that only you can use and that links you as the rightful user of the hotel keycard.

In the OAuth world, ideally, we want to link the authorization server, the client, and the access token and limit token use to the client. In OAuth terminology, the sender and client application are the same entity. By linking these entities, external parties can’t misuse the access token.

OAuth 2.0 defines a few methods to bind access tokens.

🤐 Client secret
Confidential clients are applications running in a protected environment where user authentication and token storage occur within backend servers, such as traditional server-rendered web applications. Confidential clients can use a secret value known to the requestor (the client application requesting the tokens) and the authorization server as part of HTTP requests. The client secret is a long-lived value generated by the authorization server. However, malicious parties who steal the secret can use it. 🌐 Mutual TLS Client Authentication and Certificate-Bound Access Tokens (mTLS)
Mutual authentication means parties at the ends of the network connection identify themselves using a combination of asymmetric encryption and TLS certificate as part of the HTTP request. mTLS is a highly secure method for confidential clients but can be complex to implement and maintain. 🔒 Private key JSON Web Token (JWT)
Machine-to-machine HTTP requests don’t have user context. The requesting service often uses a combination of an ID and secret using the Basic authorization scheme when making HTTP calls, but doing so isn’t secure. Private key JWTs offer a more secure approach. The requesting service uses asymmetric encryption to sign any JWTs it creates.
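As a rough sketch of the private key JWT method, a service could sign a client assertion like this using the jose library; the client ID, key ID, and private key are assumptions for the example:

import { SignJWT } from 'jose';

// The signed assertion accompanies the /token request with
// client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer
async function buildClientAssertion(clientId: string, privateKey: CryptoKey): Promise<string> {
  return await new SignJWT({})
    .setProtectedHeader({ alg: 'RS256', kid: 'my-registered-key-id' }) // hypothetical key ID
    .setIssuer(clientId)  // iss and sub are both the client ID
    .setSubject(clientId)
    .setAudience('https://{yourAuthorizationServer}/oauth2/v1/token')
    .setIssuedAt()
    .setExpirationTime('5m')
    .setJti(crypto.randomUUID()) // unique ID so the assertion can't be replayed
    .sign(privateKey);
}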

These methods apply only to confidential clients that can maintain secrets, not to public clients.

Public clients are apps that run authentication code within the user's hardware, such as Single-Page Applications (SPAs) and mobile clients. Applications built on public client architecture contain avenues for token security exploits unless carefully protected. Is there an alternative that works for confidential and public clients without incurring costly implementation and maintenance?

Demonstrate proof of possession (DPoP) using JWTs

There’s now a solution for all client types calling sensitive resources! The IETF published a new extension to OAuth 2.0: Demonstrating Proof of Possession (DPoP), targeted primarily for public client use. You may have heard of this idea before, as the concept has been around for a while. With a published spec, it’s now official, standardized, and supported!

The client and authorization server work together to generate tokens with proof of possession.

The client creates non-repudiable proof of ownership using asymmetric encryption
The authorization server uses this proof when generating the token

How is this different from earlier methods that bind the caller to the access token? The big difference is this method happens at runtime across any client type. Confidential clients have cryptographic libraries supporting public/private key encryption, but a gap exists for public clients. Thanks to enhanced browser API capabilities such as the Web Crypto API and SubtleCrypto, modern browser-based JavaScript apps can also use DPoP.

🚨 You must protect the client from Cross-Site Scripting (XSS) and Remote File Inclusion (RFI) attacks to prevent exfiltration or unauthorized use of the keyset. 🚨

Store the keys in a storage format that someone can’t export and guard the app against attacks where an attacker’s code can run in the user’s context. Use up-to-date secure SPA frameworks, employ defensive coding practices, and add appropriate Content Security Policies (CSP) to protect the client. Apply secure header best practices and consider using the Trusted Types API if you can limit end-user browser usage to browsers that support it.

⚠️ Note

We will investigate DPoP proofs and inspect how the client constructs them. However, despite this knowledge, you should always use Okta SDKs or a vetted, well-maintained library with built-in DPoP support when making requests using DPoP.

Incorporating DPoP into OAuth 2.0 token requests

When using DPoP, the client creates a “proof” using asymmetric encryption. The proof is a JWT, which includes the URI, the HTTP method of the request, and the public key. The client application requests tokens from the authorization server and includes the proof as part of the request. The authorization server binds a public key hash and the HTTP request information from the proof within the access token it returns to the client. This means the access token is only valid for the specific HTTP request.
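For illustration only (remember the note above: use Okta SDKs or a vetted library in real apps), constructing a proof with the jose library looks roughly like this inside an async context:

import { SignJWT, exportJWK, generateKeyPair } from 'jose';

// Illustrative proof construction; the SDK handles this (and key storage) for you
const { publicKey, privateKey } = await generateKeyPair('ES256');

const proof = await new SignJWT({
  htm: 'POST', // the HTTP method the proof covers
  htu: 'https://{yourAuthorizationServer}/oauth2/v1/token', // the request URI
  jti: crypto.randomUUID(), // unique ID to mitigate replay
})
  .setProtectedHeader({ alg: 'ES256', typ: 'dpop+jwt', jwk: await exportJWK(publicKey) })
  .setIssuedAt()
  .sign(privateKey);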

A sequence diagram for the OAuth 2.0 Authorization Code flow with DPoP looks like this:

The proof contains metadata identifying the sender, and it limits unauthorized use by constraining the HTTP request, the validity window, and reuse. If you inspect a decoded DPoP proof JWT, you’ll see the header contains information proving the sender:

The typ header set to dpop+jwt
The public/private key encryption algorithm (alg)
The public key in JSON Web Key (JWK) format

Inspecting the decoded proof’s payload shows claims that limit unauthorized use, such as:

HTTP request info, including the URI and HTTP method (such as /oauth2/v1/token and POST)
Issue time to limit the validity window for the proof
An identifier that’s unique within the validity window to mitigate replay attacks
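To make those claims concrete, here's a minimal sketch of minting a DPoP proof in a browser app, assuming the jose library; the claim names htm, htu, and jti come from the DPoP spec, while the endpoint URL is a placeholder. As noted above, prefer an SDK or vetted library in production:

import { generateKeyPair, exportJWK, SignJWT } from 'jose';

// Create a keypair once per session; keep the private key non-extractable.
const { publicKey, privateKey } = await generateKeyPair('ES256', { extractable: false });
const publicJwk = await exportJWK(publicKey);

// Mint a fresh proof for each request, naming the HTTP method and URI.
async function createDpopProof(method: string, uri: string): Promise<string> {
  return new SignJWT({
    htm: method,               // HTTP method of the request
    htu: uri,                  // target URI of the request
    jti: crypto.randomUUID(),  // unique ID to mitigate replay
  })
    .setProtectedHeader({ alg: 'ES256', typ: 'dpop+jwt', jwk: publicJwk })
    .setIssuedAt()             // iat limits the proof's validity window
    .sign(privateKey);
}

const proof = await createDpopProof('POST', 'https://auth.example.com/oauth2/v1/token');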

Let’s inspect the /token request a little further. When making the request, the client adds the proof in the header. The rest of the request, including the grant type and the code itself, remains the same for the Authorization Code flow.

POST /oauth2/v1/token HTTP/1.1
DPoP: eyJ0eXAiOiJkcG9w.....H8-u9gaK2-oIj8ipg
Accept: application/json
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=XGa_U6toXP0Rvc.....SnHO6bxX0ikK1ss-nA

The authorization server decodes the proof and incorporates properties from the JWT into the access token. The authorization server responds to the /token request with the token and explicitly sets the response header to state the token type as DPoP.

HTTP/1.1 200 OK
Content-Type: application/json

{
  "access_token": "eyJhbG1NiIsPOk.....6yJV_adQssw5c",
  "token_type": "DPoP",
  "expires_in": 3600,
  "refresh_token": "5PybPBQRBKy2cwbPtko0aqiX"
}
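In a browser client, the same token request might look roughly like this sketch, reusing the createDpopProof helper from the earlier example; the endpoint and authorization code values are placeholders:

const tokenEndpoint = 'https://auth.example.com/oauth2/v1/token';
const authorizationCode = '...'; // returned on the redirect from the authorization server

const response = await fetch(tokenEndpoint, {
  method: 'POST',
  headers: {
    DPoP: await createDpopProof('POST', tokenEndpoint),
    Accept: 'application/json',
    'Content-Type': 'application/x-www-form-urlencoded',
  },
  // PKCE and client parameters omitted for brevity.
  body: new URLSearchParams({ grant_type: 'authorization_code', code: authorizationCode }),
});

const { access_token, token_type } = await response.json();
console.log(token_type); // "DPoP" instead of "Bearer"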

You now have a DPoP type access token with a possession proof. What changes when requesting resources?

Use DPoP-bound access tokens in HTTP requests

DPoP tokens are no longer bearer tokens; the token is now “sender-constrained.” The sender, the client application calling the resource server, must have both the access token and a valid proof, which requires the private key held by the client. This means malicious sorts need both pieces of information to impersonate calls into the server. The spec builds in constraints even if a malicious sort steals the token and the proof. The proof limits the call to a unique request for the URI and method within a validity window. Plus, your application system still has the defensive web security measures applicable to all web apps, preventing the leaking of sensitive data such as tokens and keysets.

The client generates a new proof for each HTTP request and adds a new property, a hash of the access token. The hash further binds the proof to the access token itself, adding another layer of sender constraint. The proof’s payload now includes:

HTTP request info, including the URI and HTTP method (such as https://{yourResourceServer}/resource and GET)
Issue time to limit the validity window for the proof
An identifier that’s unique within the validity window to mitigate replay attacks
A hash of the access token

Clients request resources by sending the access token in the Authorization header using a new scheme, DPoP, along with a proof demonstrating they’re the legitimate holder of the access token. HTTP requests to the resource server change to:

GET https://{yourResourceServer}/resource HTTP/1.1
Accept: application/json
Authorization: DPoP eyJhbG1NiIsPOk.....6yJV_adQssw5c
DPoP: eyJhbGciOiJIUzI1.....-DZQ1NI8V-OG4g
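Sketching that resource call in code: the DPoP spec names the access token hash claim ath, a base64url-encoded SHA-256 of the token. This example reuses the keypair and access token from the earlier sketches, and the resource URL is a placeholder:

import { SignJWT } from 'jose';

// base64url-encoded SHA-256 of the access token, for the "ath" claim.
async function accessTokenHash(accessToken: string): Promise<string> {
  const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(accessToken));
  return btoa(String.fromCharCode(...new Uint8Array(digest)))
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

const resourceUrl = 'https://resource.example.com/resource';

const resourceProof = await new SignJWT({
  htm: 'GET',
  htu: resourceUrl,
  jti: crypto.randomUUID(),
  ath: await accessTokenHash(access_token), // binds this proof to the token itself
})
  .setProtectedHeader({ alg: 'ES256', typ: 'dpop+jwt', jwk: publicJwk })
  .setIssuedAt()
  .sign(privateKey);

const res = await fetch(resourceUrl, {
  headers: {
    Authorization: `DPoP ${access_token}`, // DPoP scheme rather than Bearer
    DPoP: resourceProof,
  },
});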

The resource server verifies the validity of the access token and the proof before responding with the requested resource.

Extend the DPoP flow with an enhanced security handshake

DPoP optionally defines an enhanced handshake mechanism for calls requiring extra security measures. The client could sneakily create proofs for future use by setting the issued time in advance, but the authorization and resource servers can wield their weapon, the nonce. The nonce is an opaque value the server creates to limit the request’s lifetime. If the client makes a high-security request, the authorization or resource server may issue a nonce that the client incorporates within the proof. Doing so binds the specific request and time of the request to the server.

An example of a highly secure request is when making the initial token request. Okta follows this pattern. Different industries may apply guidance and rules for the types of resource server requests requiring a nonce. Since the enhancement requires an extra HTTP request, use it minimally.

When the authorization server’s /token request requires a nonce, the server rejects the request and returns an error. The response includes a new header type, DPoP-Nonce, with the nonce value, and a new standard error message, use_dpop_nonce. The flow for requesting tokens now looks like this:

Let’s look at the HTTP response from the authorization and resource servers requiring a nonce. The authorization server responds to the initial token request with a 400 Bad Request and the needed nonce and error information.

HTTP/1.1 400 Bad Request
DPoP-Nonce: server-generated-nonce-value

{
  "error": "use_dpop_nonce",
  "error_description": "Authorization server requires nonce in DPoP proof"
}

When the resource server requires a nonce, the response changes. The resource server returns a 401 Unauthorized with the DPoP-Nonce header and a WWW-Authenticate header containing the use_dpop_nonce error message.

HTTP/1.1 401 Unauthorized
DPoP-Nonce: server-generated-nonce-value
WWW-Authenticate: DPoP error="use_dpop_nonce", error_description="Resource server requires nonce in DPoP proof"

We want that resource, so it’s time for a new proof! The client reacts to the error and generates a new proof with the following info in the payload:

HTTP request info, including the URI and HTTP method (such as https://{yourResourceServer}/resource and GET)
Issue time to limit the validity window for the proof
An identifier that’s unique within the validity window to mitigate replay attacks
The server-provided nonce value
A hash of the access token

With this new proof, the client can remake the request.
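A rough sketch of that retry logic, again assuming the jose-based helpers and keypair from the earlier sketches; a real client should also distinguish the authorization server's 400 JSON error from the resource server's 401 challenge, and pass the ath hash on resource-server calls:

import { SignJWT } from 'jose';

// Mint a proof, optionally echoing a server-provided nonce and token hash.
async function createProof(method: string, uri: string, nonce?: string, ath?: string) {
  return new SignJWT({
    htm: method,
    htu: uri,
    jti: crypto.randomUUID(),
    ...(nonce ? { nonce } : {}),
    ...(ath ? { ath } : {}),
  })
    .setProtectedHeader({ alg: 'ES256', typ: 'dpop+jwt', jwk: publicJwk })
    .setIssuedAt()
    .sign(privateKey);
}

async function fetchWithDpopNonce(url: string, init: RequestInit): Promise<Response> {
  let response = await fetch(url, init);
  const nonce = response.headers.get('DPoP-Nonce');

  // One retry: rebuild the proof with the nonce the server demanded.
  if (nonce && (response.status === 400 || response.status === 401)) {
    const headers = new Headers(init.headers);
    headers.set('DPoP', await createProof(init.method ?? 'GET', url, nonce));
    response = await fetch(url, { ...init, headers });
  }
  return response;
}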

Validate DPoP requests in the resource server

Okta’s API resources support DPoP-enabled requests. If you want to add DPoP support to your own resource server, you must validate the request. You’ll decode the proof to verify the properties in the header and payload sections of the JWT. You’ll also need to verify properties within the access token. OAuth 2.0 access tokens can be opaque, so use your authorization server’s /introspect endpoint to get token properties. Okta’s API security guide, Configure OAuth 2.0 Demonstrating Proof-of-Possession, has a step-by-step guide on validating DPoP tokens, but you should use a well-maintained and vetted OAuth 2.0 library to do this for you instead. Finally, enforce any application-defined access control measures before returning a response.
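For a sense of what that validation involves, here's a heavily simplified server-side sketch using the jose library. It verifies only the proof itself; a real resource server must also check the access token's cnf.jkt thumbprint against the proof key, enforce any nonce requirement, and track jti values for single use:

import { jwtVerify, EmbeddedJWK, calculateJwkThumbprint, type JWK } from 'jose';

// Verify a DPoP proof for an incoming request (simplified sketch).
async function verifyDpopProof(proof: string, method: string, uri: string): Promise<string> {
  // EmbeddedJWK checks the signature against the public key in the proof header.
  const { payload, protectedHeader } = await jwtVerify(proof, EmbeddedJWK, {
    typ: 'dpop+jwt',
    maxTokenAge: '60s', // accept only freshly issued proofs
  });

  if (payload.htm !== method || payload.htu !== uri) {
    throw new Error('Proof does not match this HTTP request');
  }
  if (typeof payload.jti !== 'string') {
    throw new Error('Missing jti'); // also persist seen jti values to reject replays
  }

  // Compare this thumbprint with the cnf.jkt claim bound into the access token.
  return calculateJwkThumbprint(protectedHeader.jwk as JWK);
}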

Learn more about OAuth 2.0, Demonstrating Proof-of-Possession, and secure token practices

I hope this intro to sender-constrained tokens is helpful and inspires you to use DPoP to elevate token security! Watch for more content about DPoP, including hands-on experimentation and code projects. If you found this post interesting, you may also like these resources:

Secure OAuth 2.0 Access Tokens with Proofs of Possession
Why You Should Migrate to OAuth 2.0 From Static API Tokens
How to Secure the SaaS Apps of the Future
Step-up Authentication in Modern Application
OAuth 2.0 Security Enhancements
Add Step-up Authentication Using Angular and NestJS

Remember to follow us on Twitter and subscribe to our YouTube channel for more exciting content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!

Wednesday, 04. September 2024

Spherical Cow Consulting

Why FIPS 140-3 Matters for Cryptography and Digital Identity Security


Cryptography is all about securing communications. Authentication, key exchange, token signing, digital signatures, zero-knowledge proofs, and so much more depend on cryptographic algorithms that no mere mortal (by which I mean me) will ever understand. The good news is that mere mortals do not need to understand these algorithms. Governments have the resources to truly dig into these algorithms and determine whether they are as secure and effective as intended. In the U.S., something called FIPS 140 sits at the heart of determining whether a cryptographic module—the actual hardware or software implementing these algorithms—is secure enough.

FIPS 140-3 is the latest iteration of the U.S. Federal Information Processing Standard (FIPS) that specifies the security requirements for cryptographic modules used by federal agencies and other organizations to protect sensitive information. If you have a cybersecurity company that does business with the U.S. Government, then you care about FIPS 140-3. If you don’t have a cybersecurity company but buy cybersecurity tools, knowing that the cryptographic modules they use to secure your data meet the FIPS 140-3 standards is a Very Good Thing.

If you aren’t involved in tech purchasing decisions for your company, this post will serve as interesting trivia for you to wow your geeky friends with over beverages. Apologies in advance for all the acronyms; they can’t be avoided if you’re in the world of tech.

Definitions

First, let’s get a few definitions out there:

Cryptography: Refers to the broader field of securing communications through mathematical techniques.
Cryptographic Algorithm: A specific method or procedure, like AES or RSA, used within the field of cryptography to encrypt or decrypt data, sign messages, or generate keys.
Cryptographic Module: A hardware or software component that implements cryptographic algorithms and provides secure services like encryption, decryption, authentication, or key management.

FIPS 140

The first FIPS 140 was published thirty years ago (where has the time gone???). The U.S. federal government realized it needed to get a handle on how the government as a whole should use cryptographic modules in its tech. Prior to that, it was something of a free-for-all: each agency made its own decisions based on whatever information and staff it had on hand. Not great.

The best thing about version 1 of anything is that it suddenly sparks all SORTS of discussion. There are new requirements, positive and negative feedback, and a desire to improve. That resulted in FIPS 140-2, published over 20 years ago in 2001. (I’m still feeling old here.) FIPS 140-2 provided clearer definitions and more detailed requirements. Just as well, since the science of cryptography had advanced and new cryptographic algorithms needed to be considered.

The U.S. Government obviously isn’t the only entity out there working out the best way to evaluate cryptographic algorithms. That’s where the International Organization for Standardization (ISO) came in. In 2012, ISO published ISO/IEC 19790:2012, “Information technology — Security techniques — Security requirements for cryptographic modules.” The U.S. National Institute of Standards and Technology (NIST) was a member of the team making that global standard. As it came time to yet again refresh FIPS 140, it made sense to point it to ISO/IEC 19790:2012. That’s now FIPS 140-3.

Cryptographic Module Validation Program (CMVP)

So now there’s a standard, updated over time, that says, “Here are the requirements for cryptographic modules to be used by the federal government.” Great! How does the government ensure that those modules meet those requirements? That’s where the Cryptographic Module Validation Program (CMVP) comes in.

The CMVP is a joint effort between NIST and the Canadian Centre for Cyber Security. It provides guidelines for accredited laboratories, called Cryptographic and Security Testing Laboratories (CSTLs). Following those guidelines, the laboratories verify that a cryptographic module submitted by a vendor satisfies the requirements. The CSTL’s findings are submitted back to the program. If everything is copacetic, the module is added to the list of validated modules that federal agencies can accept in their tools and services.

FIPS 140, the CMVP, and Digital Identity

So, how does this all tie into the world of digital identity? I have a list!

There are two things in particular to remember. First, of course, is noting that cryptography is used in a variety of ways when it comes to digital identity. Encrypting tokens, signatures, keys, and more is a fundamental necessity. Second, the federal government spends a mind-boggling amount on cybersecurity. This means their requirements for cybersecurity—such as the cryptographic modules used in the tools and services they purchase—influence almost everything in the cybersecurity industry. While following the FIPS 140 guidelines is only required for federal agencies, in practice, its reach is much broader.

Given those points, FIPS 140-3 helps lay the groundwork for secure digital identity by ensuring that the cryptographic modules used are not just good, but government-approved good. And if that isn’t enough, given that FIPS 140-3 now basically points to an internationally developed standard in the form of ISO/IEC 19790:2012, then you’re talking about something that has achieved consensus on a global scale. That’s a level of assurance that goes beyond just checking a box. It’s knowing that the systems managing your identity are backed by some of the best cryptographic practices in the world.

Wrap Up

As a regular consumer, you really don’t need to know about FIPS 140 and its associated validation program. As a cybersecurity practitioner, you should at least be aware that it’s there and what it implies. And as an executive who has responsibility for the security of your company or what goes into your products, all of this should already be familiar to you.

This is going to be an area I learn more about over the next few months. And since I learn best through writing, you can expect more blog posts on the topic of how the U.S. Government thinks about cryptographic modules. Stay tuned!

I want to help you go from overwhelmed at the rapid pace of change in identity-related standards to prepared to strategically invest in the critical standards for your business. Follow me on LinkedIn or reach out to discuss my Digital Identity Standards Development Services.

The post Why FIPS 140-3 Matters for Cryptography and Digital Identity Security appeared first on Spherical Cow Consulting.


KuppingerCole

Oct 15, 2024: A False Sense of Security: Authentication Myths That Put Your Company at Risk 

In today's digital landscape, organizations often fall prey to a false sense of security, particularly concerning authentication practices. Misconceptions about identity security can leave companies vulnerable to evolving threats, potentially compromising sensitive data and systems. Understanding the realities behind these myths is crucial for developing robust authentication strategies.

Ontology

Decentralized Identity and Reputation: Balancing Freedom and Regulation in Digital Platforms


In today’s digital landscape, the rapid pace of technological innovation has brought us to a crossroads, where the ideals of privacy, autonomy, and freedom meet the very real challenges of regulation. While decentralized platforms promise a world free from the prying eyes of governments and corporations, they also pose significant challenges, particularly when they are used to facilitate illegal activities. Take, for example, the infamous cases of Silk Road, Tornado Cash, and Telegram — each a flashpoint in the ongoing battle between technological freedom and the need for regulation. But what if there were a way to strike a balance? A decentralized reputation system, paired with anonymous identities, could offer a middle ground, where freedom meets responsibility.

The Evolution of Privacy Platforms: Case Studies

Silk Road: The Dark Web’s Pioneer

Silk Road was more than just an online black market; it was the first glimpse into a future where decentralized platforms could operate outside the reach of traditional law enforcement. Founded by Ross Ulbricht in 2011, Silk Road leveraged Bitcoin and the Tor network to create a truly global, anonymous marketplace. It was a hub for illegal activities — primarily drug trafficking — hidden from the watchful eyes of the law. The importance of Silk Road lies not just in its role as a market but in how it demonstrated the power of cryptocurrencies and decentralized platforms. It set a precedent, showing how these technologies could facilitate both freedom and crime on a massive scale.

Tornado Cash: Anonymizing Cryptocurrency Transactions

Tornado Cash pushed the boundaries of financial privacy. This cryptocurrency mixer on the Ethereum blockchain provided users with the tools to anonymize their transactions, protecting their financial data from surveillance. But with great power comes great responsibility — or, in this case, irresponsibility. Tornado Cash became a haven for money laundering, exploited by criminals and even North Korean hackers. The arrest of Tornado Cash developer Alexey Pertsev by Dutch authorities in August 2022 sparked a heated debate about the balance between privacy and security, and whether developers should be held accountable for the misuse of their creations.

Telegram: A Platform for Secure Communication

Telegram’s commitment to privacy and encryption has made it the go-to app for nearly 1 billion users seeking secure communication. From activists to journalists, many rely on Telegram to protect their privacy in the face of government surveillance. However, while Telegram is not decentralized, its strong encryption and anonymity features have also made it attractive to criminal organizations, coordinating everything from drug trafficking to child exploitation. The recent arrest of Telegram’s CEO, Pavel Durov, in France has intensified the debate about the role of tech platforms in moderating content and their accountability for illegal activities.

The Regulatory Response: Challenges and Consequences

The arrests of figures like Ulbricht, Pertsev, and Durov are part of a broader governmental push to regulate decentralized and privacy-focused platforms. But this raises some tough questions: Are we stifling innovation and free speech in the process? The legal complexities of regulating these platforms, especially when it comes to holding developers accountable, highlight the difficulty in balancing privacy with security.

Proposed Solution: Decentralized Identity and Reputation Systems

So, how do we move forward? One potential solution lies in the development of decentralized reputation systems paired with anonymous identities. Imagine a world where users can maintain their privacy while building a reputation based on their actions within the community. Such a system could empower communities to self-regulate, reducing the need for external oversight.

Anonymous Identity Systems

Anonymous identity systems could be the key to balancing privacy with accountability. These systems would allow users to engage with decentralized platforms without revealing their true identities, while still being held accountable for their actions.

Decentralized Reputation Systems

A decentralized reputation system could serve as a form of self-regulation. Users would build reputations based on their behavior, with ethical actions rewarded and illegal activities flagged or excluded. This could mitigate the need for heavy-handed regulation while preserving the core values of decentralization.

Practical Considerations and Challenges

Of course, implementing such systems won’t be without challenges. From technical limitations to potential exploitation, these solutions require careful design and community buy-in. But with transparency and engagement, we could create a system that balances freedom with responsibility.

Conclusion

The stories of Silk Road, Tornado Cash, and Telegram underscore the dual-edged sword of privacy-focused technology. While these platforms offer unprecedented privacy and autonomy, they also create new avenues for crime. A balanced approach, using decentralized reputation systems and anonymous identities, could offer a path forward. As we continue to navigate the digital age, it’s essential that we foster dialogue between innovators, regulators, and users to ensure that technology serves the greater good, protecting both freedom and security in this brave new world.

Decentralized Identity and Reputation: Balancing Freedom and Regulation in Digital Platforms was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


BlueSky

Welcome to Bluesky!

What a week! In the last few days, more than 2.6 million users have signed up for the platform, over 85% of them Brazilian. A very warm welcome; we're delighted to have you here!

What makes Bluesky different?

At its core, Bluesky puts you first and gives you more control. Here, you can choose the social experience that suits you best.

Our community has grown organically and is full of writers, artists, journalists, politicians, and more. Brazilian users already on the platform have noticed that they get much higher-quality engagement here than on any other platform.

Bluesky is also an open ecosystem. We built an open social network that any developer can extend through the AT Protocol (the surrounding ecosystem is called the Atmosphere). This openness means Bluesky is a collaborative project, unlike other social networks controlled by a single company. Anyone can build feeds, moderation services, and even entirely new apps on top of our platform.

When will Bluesky release video and trending topics?

Video will be available in our next major update, and we're already working on trending topics. We're paying close attention to your feedback and are thrilled by your excitement.

What are Bluesky's unique features?

Custom Feeds

Beyond the chronological Following feed and the classic Discover feed, you can try out new feeds! For example, if you want to see posts from friends who don't post much, try Quiet Posters. If you want to see the most-posted content from the previous day across the whole platform, try Catch Up.

Anyone can create and subscribe to feeds. Instead of providing a single algorithm, we let our users choose. You're in control. The idea is to promote healthier discussion, since we don't incentivize engagement-boosting schemes, misinformation, fake news, or any kind of abuse.

Usernames

If you own a website, you can use it as your username. For example, Folha de S. Paulo chose @folha.com as its handle. Just remember that you can only use the name of a website you actually own, since this is a way of showing that you are, for example, the real Folha de S. Paulo. It's a way to prove you're legitimate.

You can also have fun inventing your own! For example, many Swifties have chosen usernames ending in "swifties.social," which you can set up using this tool here.

If you'd like to purchase and manage a website through our partner Namecheap, you can do that here.

I'm tired of creating new social media accounts! Is there any guarantee Bluesky will stick around?

We know, and we deeply understand that concern. But Bluesky is here to stay.

When a platform like X shuts down, you lose touch with all your friends there. But because Bluesky is an open-source network, you can take your followers with you, so you'll always be able to stay in touch with your friends. (If you're interested in the technical details, there's more information about account portability here.)

And there's more! Because it's an open social network, independent developers can build entirely new apps and offer you other experiences. Imagine a blogging platform or a photo app on this same network, with all your friends already connected. This time, you won't need to sign up for yet another social app; you'll be building an online social identity that's yours alone.

How does Bluesky handle free speech and content moderation?

Safety and fostering healthy spaces for conversation are central to Bluesky. Our moderation team is on duty 24/7 and responds to most reports within a few days. To report a post or an account, simply click the three-dot menu and then "Report post" or "Report account."

At the same time, we understand that no single approach to moderation fits every space. So, beyond Bluesky's solid baseline moderation policies, you can subscribe to moderation from other organizations you trust, or from communities with specific expertise, which can add their own moderation rules. (Read more about ways to add moderation rules here.)

How do you plan to handle election misinformation?

Aaron Rodericks, who heads trust and safety on the platform, dealt with these issues at Twitter and brought that experience here. Our moderation team reviews content and accounts for misinformation, which users can report directly from the app. In cases of severe violations, such as risks to voting or the official elections, we may remove the content or even the account. In most cases, we review claims that content is false against trusted sources, and we reserve the right to label posts as misinformation.

Journalists can reach us at press@blueskyweb.xyz. For our media kit, where you'll find our logo and photos, click here.


Welcome to Bluesky!

What a week! In the last few days, Bluesky has grown by more than 2.6 million users, over 85% of which are Brazilian. Welcome, we are so excited to have you here!


What makes Bluesky different?

By design, Bluesky gives users more control and prioritizes you. Here, you can customize your social experience to fit you.

Our community has grown organically, and is full of creators, artists, journalists, politicians, and more. Brazilian users on Bluesky have noticed that they receive much higher quality engagement on Bluesky than on any other platform.

In addition, Bluesky is an open ecosystem. We’re built on an open network that developers can freely build upon called the AT Protocol (and the ecosystem is called the Atmosphere). This openness means that Bluesky is a collaborative project, unlike other social networks that are controlled by a single company. Anyone can build feeds, moderation services, and even entirely new apps on top of our network.

What are some unique features on Bluesky?

Custom Feeds

Outside of your chronological Following feed and the default Discover feed, you can try out some new feeds! Maybe you want to see posts from your friends who don’t post as often — try Quiet Posters. If you want to see the top posts across the whole network from the last day, try Catch Up.

Anyone can create and subscribe to feeds. Instead of providing only a single algorithm, we let users choose. You’re in control. This promotes healthier discussion because we do not incentivize engagement baiting, misinformation, or harassment.

Usernames

You can set your username to be a website that you own. For example, Folha de S. Paulo set their Bluesky username to @folha.com. You can only set your username to a website that you own, so this shows you that the real Folha de S. Paulo owns this account. It’s one form of self-verification.

There’s lots of room to have fun with this! For example, many Swifties are using usernames that end in “swifties.social,” which you can set up with this community tool here.

If you’d like to purchase and manage a website through Bluesky’s partnership with Namecheap, you can do that here.

I’m tired of creating accounts on new social apps! Will Bluesky stick around?

We know, we’ve been there too. Bluesky is here to stay.

When an app like X shuts down, you lose touch with all your friends there. But because Bluesky is built on an open network, you can easily take your followers with you. You will always be able to stay in touch with your friends. (If you’re interested in the technical details, you can read more about account portability here.)

Additionally, because of the open network, independent developers can build entirely new apps and experiences. Imagine a blogging platform or a photo app built on this same network, with all of your friends already connected. You’re not just signing up for another social app this time — you’re creating a social identity online that you own.

When will Bluesky have video and trending topics?

Video will be available in the next major app release, and we’re working on trending topics too. We’re paying close attention to your feedback and appreciate everyone’s excitement.

How does Bluesky handle content moderation?

Trust and safety is core to Bluesky, and we value spaces for healthy conversation. Our moderation team provides 24/7 coverage and responds to most reports within a few days. To report a post or an account, simply click the three-dot menu and click “Report post” or “Report account.”

At the same time, we recognize that there’s no one-size-fits-all approach to moderation. So, on top of Bluesky's strong foundation, users can subscribe to additional moderation decisions from more organizations they trust with industry-specific or community-specific knowledge. (Read more about our stackable approach to moderation here.)

What is your plan for election misinformation?

Aaron Rodericks, Bluesky's Head of Trust & Safety, formerly led election integrity efforts at Twitter and has brought his experience here. Our moderation team reviews content or accounts for misinformation, which users can report directly within the app. In the case of severe violations such as a risk to polling places or election officials, we may remove content or accounts. In most cases, we review claims against credible sources and fact checkers, and may label posts as misinformation.

Journalists can reach us with inquiries at press@blueskyweb.xyz. For our media kit, where you can find our logo and headshots, click here.

Tuesday, 03. September 2024

Microsoft Entra (Azure AD) Blog

MFA enforcement for Microsoft Entra admin center sign-in coming soon


As cyberattacks become increasingly frequent, sophisticated, and damaging, safeguarding your digital assets has never been more critical. In October 2024, Microsoft will begin enforcing mandatory multifactor authentication (MFA) for the Microsoft Entra admin center, Microsoft Azure portal, and the Microsoft Intune admin center.

We published a Message Center post (MC862873) to all Microsoft Entra ID customers in August. We’ve included it below:

Take action: Enable multifactor authentication for your tenant before October 15, 2024

Starting on or after October 15, 2024, to further increase your security, Microsoft will require admins to use multifactor authentication (MFA) when signing into the Microsoft Azure portal, Microsoft Entra admin center, and Microsoft Intune admin center.

Note: This requirement will also apply to any services accessed through the Intune admin center, such as Windows 365 Cloud PC. To take advantage of the extra layer of protection MFA offers, we recommend enabling MFA as soon as possible. To learn more, review Planning for mandatory multifactor authentication for Azure and admin portals.

How this will affect your organization:

MFA will need to be enabled for your tenant to ensure admins are able to sign into the Azure portal, Microsoft Entra admin center, and Intune admin center after this change.

What to do to prepare:

If you have not already, set up MFA before October 15, 2024, to ensure your admins can access the Azure portal, Microsoft Entra admin center, and Intune admin center. If you are unable to set up MFA before this date, you can apply to postpone the enforcement date. If MFA has not been set up before the enforcement starts, admins will be prompted to register for MFA before they can access the Azure portal, Microsoft Entra admin center, or Intune admin center on their next sign-in.

For more information, refer to: Planning for mandatory multifactor authentication for Azure and admin portals.

Jarred Boone
Senior Product Marketing Manager, Identity Security

Read more on this topic

Planning for mandatory multifactor authentication for Azure and other administration portals

Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

KuppingerCole

Passwordless Authentication for Enterprises


by Alejandro Leal

Explore the rise of passwordless authentication, its security benefits, and how it mitigates common password-based attacks like phishing, brute-force, and ATO fraud. This Buyer's Compass can help you find the solution that best fits your business needs.

Tokeny Solutions

ShipFinex and Tokeny Forge Strategic Partnership to Revolutionize Maritime Asset Tokenization


Luxembourg, 3rd September 2024 – ShipFinex, a leading innovator in maritime finance, and Tokeny, the pioneering onchain finance operating system for tokenized securities, are proud to announce a strategic partnership aimed at transforming the way maritime assets are tokenized and managed.

This collaboration brings together two industry pioneers with a shared vision of enhancing transparency, security, and compliance in the tokenization of maritime assets. By joining forces, ShipFinex and Tokeny are poised to set a new standard in the digital finance landscape, particularly within the multi-billion-dollar maritime sector.

Elevating Maritime Finance

Shipping and maritime finance have long been an exclusive asset class. Limited access to public equity markets and significant initial capital requirements make it challenging for many investors to participate, even though the shipping market has consistently outperformed many other asset classes.

ShipFinex and Tokeny are committed to democratizing access to maritime investments. Through this partnership, ShipFinex will leverage Tokeny’s cutting-edge technology to ensure that all tokenized maritime assets on its platform meet the highest standards of regulatory compliance and security, using the ERC-3643 standard. This integration not only enhances investor confidence but also positions both companies as leaders in the digital transformation of maritime finance.

Strategic Alignment

The partnership between ShipFinex and Tokeny is a strategic alignment that amplifies the strengths of both companies. ShipFinex’s expertise in maritime finance, combined with Tokeny’s proven track record in tokenized securities infrastructure, creates a powerful synergy that is expected to accelerate the growth and adoption of tokenized maritime assets globally.

Following ShipFinex’s recent announcement of receiving initial approval from VARA in the UAE, this partnership underscores the company’s commitment to adopting world-class solutions to enhance its platform’s security and compliance. This collaboration highlights the robust infrastructure and innovative, regulated approach underpinning ShipFinex’s operations.

Looking Ahead

This strategic partnership sets the stage for future growth and expansion, as both ShipFinex and Tokeny continue to innovate and lead in their respective fields. The integration of their capabilities will facilitate the broader adoption of tokenized maritime assets, offering investors a secure and efficient marketplace.

About ShipFinex

ShipFinex is revolutionizing maritime finance by providing a secure, transparent, regulated, and efficient marketplace for tokenized maritime assets, enabling global investors to access and trade these assets like never before.

About Tokeny

Tokeny is a leading onchain finance operating system. Tokeny has pioneered compliant tokenization with the open-source ERC-3643 standard and advanced white-label software solutions. The enterprise-grade platform and APIs unify fragmented onchain and offchain workflows, integrating essential services to eliminate silos. It enables seamless issuance, transfer, and management of tokenized securities. By automating operations, offering innovative onchain services, and connecting with any desired distributors, Tokeny helps financial actors attract more clients and improve liquidity. Trusted globally, Tokeny has successfully executed over 120 use cases across five continents and facilitated 3 billion onchain transactions and operations.

Website | LinkedIn | X/Twitter

The post ShipFinex and Tokeny Forge Strategic Partnership to Revolutionize Maritime Asset Tokenization appeared first on Tokeny.

Monday, 02. September 2024

Dock

Dock and cheqd Form Alliance to Accelerate Global Adoption of Decentralized ID


We are excited to announce that the Dock and cheqd tokens and blockchains are merging to form a Decentralized ID alliance.

By harnessing the combined strengths of two industry pioneers, Dock and cheqd will accelerate the global adoption of decentralized identity and verifiable credentials, empowering individuals and organizations worldwide with secure and trusted digital identities.

Existing $DOCK tokens will be converted into $CHEQ tokens (pending governance approval from token holders in both communities). This will mark a new chapter of opportunity for our token holders, who will benefit from all the Web3 resources cheqd has at its disposal.

Full article: https://dock.io/post/dock-and-cheqd-form-alliance-to-accelerate-global-adoption-of-decentralized-id


KuppingerCole

SOAR Platforms and Generative AI: Building an AI-Skilled Workforce


by Alejandro Leal

From Luddites to AI

Legend has it that in 1779, a man named Ned Ludd, angered by criticism and orders to change his traditional way of working, smashed two stocking frames. This act of defiance became emblematic of the “Luddite” movement against the encroaching mechanization that threatened the livelihoods of skilled artisans during the early Industrial Revolution.

Throughout history, workers have adapted to new technologies, from the complex machinery of the Industrial Revolution to today's sophisticated AI systems. Initially, industrial workers had to master mechanical operations to support mass production. Later, the digital revolution demanded proficiency with computers for a variety of tasks.

Now, the integration of AI in workplaces emphasizes skills in managing and leveraging intelligent systems to boost productivity and decision-making processes. This ongoing evolution demonstrates the need for continuous learning and adaptability, underscoring the increasing complexity of skills involved in today’s jobs.

The Evolving Role of Cybersecurity Analysts

Building an AI-skilled workforce requires not only equipping professionals with the tools and knowledge necessary to leverage AI technologies, but also addressing the persistent challenges of the human factor in cybersecurity by implementing the right tools, cultivating a cybersecurity culture, and fostering new skills.

For example, the art of prompt engineering is a relatively new and useful skill. This discipline allows analysts to develop and optimize prompts to use Large Language Models (LLMs) efficiently. These prompts are designed to optimize the language model's performance, ensuring that it produces the desired output with minimal computational resources. For security analysts, generative AI offers a remarkable leap forward in the effectiveness of their work.

The integration of generative AI into Security Orchestration, Automation, and Response (SOAR) platforms has the potential to change the role of Security Operations Centre (SOC) analysts. This technology automates routine tasks, allowing analysts to spend more time on strategic aspects of their roles, such as planning new defensive strategies, identifying emerging threats, and formulating proactive mitigation plans.

Balancing Innovation and Responsibility

However, the potential use of generative AI goes beyond simply automating tasks or interacting with a chatbot. For instance, SOC analysts can now use generative AI to craft detailed playbooks that document the steps taken during an incident response. This documentation process not only automates responses but also builds a knowledge base that can inform future responses.

SOC analysts can also use generative AI to create alerts and perform tasks such as threat detection, incident analysis, summarize events, create reports, enhance decision making, suggest playbook templates, etc. While the integration of generative AI into SOAR platforms offers substantial benefits, there are several challenges that need to be addressed.

Generative AI requires access to vast amounts of data to learn and make decisions. Ensuring that this data is handled securely and in compliance with privacy regulations is a significant challenge. In addition, there is a risk that AI models may develop biases based on the data they are trained on, which can lead to inaccurate or unfair outcomes.

Therefore, the use of generative AI must be accompanied by thorough quality control on the part of the vendor, to ensure that the information provided is indeed useful and accurate. This balanced approach reflects a careful consideration of both the opportunities and the complexities involved with integrating new technologies into security operations.

While some vendors are highly optimistic about the transformative potential of generative AI in SOAR solutions, others remain cautious, choosing to monitor the industry's development closely. These cautious vendors prioritize understanding how to align with customer expectations and carefully evaluate the practical advantages and potential challenges of implementing generative AI.

Great Expectations

By harnessing the potential of generative AI, however, SOC analysts can broaden their scope within cybersecurity practices, cultivating new knowledge and developing new skills.  While Ludd's reaction was to destroy the machines he feared would replace human craftsmanship, the challenge now is not to resist technological advancement, but to integrate it. This approach reflects a broader trend in AI development, where the goal is not to replace human endeavor, but to augment it.

As a result, vendors should prioritize transparency in their marketing to demonstrate the practical value of generative AI, rather than relying on hype or jargon. This approach not only educates customers about the capabilities and limitations of generative AI but also helps in setting realistic expectations. For more on this, see my colleague John Tolbert's blog post on Some Direction for AI/ML-less Marketing.

Join us in December in Frankfurt at our cyberevolution conference, where we will continue to dissect how AI is used in cybersecurity.

See some of our other articles and videos on the use of AI in security:

Cybersecurity Resilience with Generative AI

Generative AI in Cybersecurity – It's a Matter of Trust

ChatGPT for Cybersecurity - How Much Can We Trust Generative AI?

Asking Good Questions About AI Integration in Your Organization

Reflections & Predictions on the Future Use (and Mis-Use) of Generative AI in the Enterprise and Beyond


Passwordless Authentication for Enterprises


by Alejandro Leal

This report provides a detailed examination of passwordless authentication technologies designed for enterprise use cases. As organizations increasingly prioritize robust and streamlined security protocols, the demand for sophisticated passwordless solutions has grown significantly. This report explores the current landscape of enterprise-focused passwordless authentication technologies and guides businesses in selecting the most effective solution to meet their security needs. By analyzing the market segment, vendor product and service functionality, relative market share, and innovative approaches, organizations can make informed decisions about their authentication strategies for their employees and systems.

Finema

This Month in Digital Identity — September Edition


Welcome to the September edition of our monthly digital identity series! This month, we’re exploring the critical developments and innovative strategies that are redefining the landscape of digital identity. Here’s a closer look at the essential topics we’ll be covering:

AI Enhancing Healthcare Fraud Prevention

Artificial Intelligence (AI) is becoming a crucial tool in combating healthcare fraud by analyzing vast datasets in real-time to detect fraudulent activities, particularly through voice biometrics that verify patient identities and prevent unauthorized access to healthcare services. Additionally, there is a growing focus on enhancing patient experiences through digital trust technologies, such as secure digital signatures and messaging platforms, which protect patient data and streamline healthcare processes. Innovations like chip-based ID cards are also being adopted, as seen in Vietnam, to secure patient information and simplify access to healthcare services, reducing the risk of identity theft and fraud. These technological advancements collectively aim to strengthen the integrity of healthcare systems, safeguard patient data, and improve operational efficiency, ultimately enhancing the overall patient experience.

Somalia’s Financial Inclusion Drive

Somalia is advancing its digital transformation with a new Memorandum of Understanding (MoU) between the National Identification and Registration Authority (NIRA) and the Somali Banks Association (SBA) to drive financial inclusion through the national ID program. Launched a year ago, this program aims to provide the 18 million residents with a unified identity, facilitating access to banking services and aligning with global standards. The partnership seeks to enhance financial security, reduce fraud, and streamline banking processes by using the National Identification Number (NIN) for customer verification. This initiative is part of a broader effort to bolster the country’s economy, ensure compliance with international regulations, and increase public trust in financial institutions. The collaboration has been praised by key government figures and international partners, who see it as crucial for Somalia’s development. Ongoing consultations with stakeholders aim to further strengthen the national ID system, making it more impactful in supporting economic growth and modernizing financial services.

Spain’s New Age Verification System

Spain has introduced technical specifications for a new online age verification system aimed at controlling minors’ access to adult content, using W3C Verifiable Credentials (VCs) as the core technology. This approach addresses growing concerns over the negative impact of unrestricted access to adult content on the mental health and social skills of children and teenagers. By implementing W3C VCs, Spain ensures that age verification is conducted securely and privately, without disclosing personal information, thus aligning with GDPR principles. W3C VCs offer unmatched security through advanced cryptographic methods, enhanced privacy by allowing users to share only necessary information, and portability by integrating seamlessly with digital wallets. The system also follows the OpenID For Verifiable Presentations (OpenID4VP) specification, ensuring secure and private verification, and includes a trust management framework to ensure only authorized entities can issue or verify credentials, making it an ideal solution for protecting minors online.

The Digital Travel Credential (DTC)

In the realm of digital identity, numerous digital credentials are vying to replace physical documents, with the European Union’s eIDAS 2.0 and digital driver’s licenses being notable examples. However, none match the Digital Travel Credential (DTC) standard for digital trust, developed by the International Civil Aviation Organization (ICAO), which sets the universal standards for passports. The DTC, designed as the digital equivalent of a passport, offers two types: one created by a user from their physical passport and another issued directly by passport authorities. Indicio and SITA pioneered the implementation of the Type 1 DTC, which is now being adopted by countries and airlines for seamless travel. The DTC’s strength lies in its use of cryptographic verification, ensuring that passport data is securely held on a user’s device without needing to be stored in centralized databases, mitigating risks of data breaches. By scanning their passport, users can verify the authenticity of their data, bind it to their device through biometric checks, and ensure that their digital credentials are trustworthy and tamper-proof. This system provides airlines, airports, and border control with the confidence to streamline travel processes, knowing that the data in the DTC is authenticated, portable, and instantly verifiable.

We look forward to bringing you more insightful updates as we continue to explore the latest trends and innovations in the field of digital identity. Stay tuned for future editions of our monthly segment!

This Month in Digital Identity — September Edition was originally published in Finema on Medium, where people are continuing the conversation by highlighting and responding to this story.


Metadium

POSTECH Adopts Metadium Mainnet-Based Smart Student ID


Dear Community,

We have some exciting news to share. Pohang University of Science and Technology (POSTECH) has adopted a blockchain-based smart student ID using Metadium’s mainnet. This significant achievement demonstrates the excellence and reliability of Metadium’s technology.

Here are the unique features that make POSTECH’s smart student ID stand out:

Security and Privacy: Students’ personal information is securely protected through the Metadium mainnet, making it impossible to falsify or tamper with user information.

Convenient Use: Using blockchain-based DID authentication, users can manage their personal information and selectively submit information. Additionally, students can easily issue and use mobile student IDs remotely through their smartphones.

Efficient Management: The university can now issue mobile smart student IDs through an online automated process, in addition to plastic student IDs, enabling more efficient workflow improvements.

This case at POSTECH is an excellent example of how blockchain technology can be applied to make our lives more convenient. Our Metadium team will continue to strive for more universities and institutions to use Metadium’s technology.

We are truly grateful for the unwavering interest and support from the Metadium community. We eagerly look forward to your continued support.

Thank you.

- The Metadium Team

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

POSTECH Adopts Metadium Mainnet-Based Smart Student ID was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 01. September 2024

KuppingerCole

Generative AI in SOAR: Balancing Innovation and Responsibility

Generative AI is ubiquitous – anyone can use ChatGPT and other tools for free to create text, images, and more. But generative AI also has potential in the professional environment, and businesses should consider how they can leverage it through techniques such as prompt engineering.

In this episode, Alejandro and Matthias discuss the integration of machine learning and AI into cybersecurity infrastructures, particularly SOARs. The conversation covers the role of generative AI in changing the daily tasks of cybersecurity professionals, the challenges of integrating generative AI into SOAR platforms, the importance of prompt engineering, and the need for a balanced approach to innovation and accountability. It also addresses the security and ethical considerations of using AI in cybersecurity and the general impact of generative AI on different industries.



Thursday, 29. August 2024

Spruce Systems

Why the U.S. Post Office is Key to Fighting AI Fraud

Pending legislation could transform the venerable USPS into a key player in the fight against fraud.

For years now, the United States Postal Service has been struggling to adjust to the digital world, as the decline of letter mail has left the agency’s budget in shambles. That’s a threat to the Postal Service’s role in connecting all Americans.

Fortunately, a bill under consideration in the U.S. Senate, the POST ID Act, would reinvigorate the venerable service for a new era, help ease USPS’s budget woes – and make it a powerful asset for digital security. The bill proposes using physical Post Office locations to offer real-world identity verification – verification that would, in turn, help fight fraud and disinformation online.

That’s similar to the way DMV locations in states like California issue both traditional and digital driver’s licenses. But the Post Office could play a much broader role: the bill’s bipartisan sponsors, Bill Cassidy (R-LA) and Ron Wyden (D-OR), want to allow the Post Office to perform identity verifications for an array of private clients, in addition to the public sector agencies it already serves. Combined with some product strategy, this new paid service could help balance the agency’s budget as well.

This new USPS service would be an extension of the agency’s longtime work connecting people against all obstacles. Having never been stayed by “snow nor rain nor heat nor gloom of night,” this new Postal Service would also be tasked with helping overcome hackers.

A Physical Network for the Digital Age

Senator Wyden was absolutely spot-on when he said that “AI deepfakes have added a whole new challenge for the most common [online identity] verification methods. The best way to confirm who someone is, is in-person verification.”

Wyden’s warning came in October of last year, and the threat of AI has only become more obvious since then. That includes a recent report that artificial intelligence was being used to create convincing fake ID cards at an unprecedented scale, and the equally concerning evolution of deepfake tools into the realm of video, allowing convincing live impersonation online.

But those tricks don’t work in the physical world. Only a real, natural human can walk up to the counter at a Post Office and seek identity verification by a fellow human. Not just physical appearance, but also biometrics like fingerprints are much harder to fake in person than online.

There are very few entities of any sort better positioned to conduct that affirmation than the U.S. Post Office. The USPS has a staggering 31,123 locations across practically every corner of America - even without including locations operated under contract. Post Offices can be found in far-flung U.S. territories like Guam, or at the far northern edge of Alaska, guaranteeing new verification services can be accessed by very nearly every American.

Once an identity is verified in person, it can be digitally recorded using new digital identity credential technology that is extremely trustworthy and secure—and even lets users verify their humanness without revealing their identity.

The Power of Cryptography

The Cassidy-Wyden bill would give the USPS new responsibilities for verifying natural humans, and the ability to serve an array of clients would create a new stream of revenue for the agency. Those verifications would then need to be represented as a trustworthy “digital credential” for users to present online. Luckily, such systems already exist, for instance, in the form of the digital driver’s license offered in California and a growing list of other states.

Trustworthy digital credentials rely on a mix of innovative encryption and widely available hardware – specifically, your mobile phone. In broad outline, a credential issuer like the DMV or Post Office would have a unique digital ‘signature’ tied to a secure computer on-site. After conducting identity verification, the USPS office would digitally sign a credential bound to the “secure element” chip in the recipient’s mobile phone. This credential could then be presented in a variety of contexts to help a user prove their identity.
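
To make that outline concrete, here is a minimal Python sketch using the `cryptography` package: an issuer key signs a payload that embeds the device’s public key, binding the credential to the phone. The issuer identifier and claim fields are hypothetical placeholders, not the actual USPS or mDL formats.

```python
# A minimal sketch of credential issuance and verification. The issuer name
# and claim fields are illustrative assumptions, not a real standard.
import json
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Issuer's signing key (in practice, held in secure on-site hardware).
issuer_key = ec.generate_private_key(ec.SECP256R1())

# Device key pair (in practice, generated inside the phone's secure element;
# the private key never leaves the chip).
device_key = ec.generate_private_key(ec.SECP256R1())
device_pub_pem = device_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# Credential payload: claims verified in person, bound to the device key.
credential = json.dumps({
    "issuer": "usps.example/office-1234",   # hypothetical identifier
    "claims": {"person_verified": True},
    "holder_key": device_pub_pem.decode(),  # binds credential to the device
}).encode()

signature = issuer_key.sign(credential, ec.ECDSA(hashes.SHA256()))

# A verifier holding the issuer's public key checks the signature;
# verify() raises InvalidSignature if anything was tampered with.
issuer_key.public_key().verify(signature, credential, ec.ECDSA(hashes.SHA256()))
print("credential signature valid")
```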

The details of the “identity” that a user wants to prove can vary widely, and digital credentials of this sort are very flexible. A common feature of digital credentials is what’s known as “selective disclosure,” which lets a credential holder share only the minimum required information in a particular interaction. 
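
To make selective disclosure concrete, here is a minimal sketch in the spirit of salted-hash approaches such as SD-JWT: the issuer signs only hash commitments of the claims, and the holder later reveals the salt and value for just the claims they choose. The claim names are illustrative, and the issuer’s signature over the commitments is omitted for brevity.

```python
# A minimal sketch of selective disclosure via salted hash commitments,
# loosely in the spirit of SD-JWT. Claim names are illustrative.
import hashlib
import os

claims = {"name": "Alice Example", "age_over_18": "true", "city": "Portland"}

# Issuer: commit to each claim with a fresh random salt.
disclosures = {k: (os.urandom(16).hex(), v) for k, v in claims.items()}
committed = {
    k: hashlib.sha256(f"{salt}:{v}".encode()).hexdigest()
    for k, (salt, v) in disclosures.items()
}
# In a real credential, `committed` (not the raw claims) is what gets signed.

# Holder: choose to disclose only one claim.
salt, value = disclosures["age_over_18"]

# Verifier: recompute the commitment and check it against the signed set.
assert hashlib.sha256(f"{salt}:{value}".encode()).hexdigest() == committed["age_over_18"]
print("age_over_18 =", value, "- no other claims revealed")
```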

At its most minimal, a digital credential issued by the USPS could prove only that the holder is a real human being without disclosing any other identifying data. As laid out in a recent research paper by a coalition including researchers from SpruceID, this simple “personhood credential” could be a key element in the fight against costly identity fraud and toxic disinformation online.

Expanding the Network of Trust

The near-omnipresence of USPS locations makes the agency an ideal candidate, alongside DMVs, to lead the charge for in-person identity verification and issuance. We can still think bigger, though.

Other trusted entities might be brought into the in-person verification network, expanding access and convenience even further. Candidates might include other shippers, such as UPS and FedEx, which have extensive physical networks as well as address records and other data that can help confirm identities. In the most rural or remote parts of America, retailers might be recruited to the network, though they would require significant additional equipment and training. One benefit of allowing certified private sector participants to also provide in-person identity verification is to keep costs low for users and businesses, while incentivizing competition and innovation.

Over time, the identity verification process would also be streamlined for efficiency and convenience. One major potential efficiency would be collecting an applicant’s data online before an in-person verification session, reducing wait times and workloads. Streamlining of this sort would be particularly important since some digitally signed credentials need to be refreshed more often than conventional physical identity documents.

Offering identity verification via Post Office locations would be part of an even more expansive system of verifications built on a shared standard for data formats, security practices, and privacy measures. The larger system that SpruceID is helping drive forward is flexible, offering various options for credential holders to choose what data they share.

But perhaps the most important yet challenging feature of this emerging system is creating broad access to in-person verification. For that, the good old Post Office will be hard to beat.

To learn more about SpruceID and our approach to fighting AI fraud, visit our website.

Learn More

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


liminal (was OWI)

Link How-To: Curate Actionable Insights and Gain a Competitive Edge with the Market Monitor™

With information overload becoming a constant challenge, quickly accessing relevant and actionable insights is essential to making informed decisions and staying competitive. The Link Market Monitor, powered by expert-in-the-loop AI technology, combines real-time data with expert analysis to cut through the noise and surface what’s important to you—and what you should do about it. By delivering only the most pertinent market signals, it allows you to efficiently spot trends and seize new opportunities. This guide will show you how to use the Market Monitor to tailor insights to your needs, ensuring you’re always a step ahead.

Step 1: Accessing the Market Monitor™

From the Dashboard: Navigate to your Link’s dashboard. Look for the Market Monitor widget, which displays recent headlines from your top monitors. Click on the widget to be taken directly to the Monitors Page.

Using the Left Navigation Menu: In the platform’s main interface, locate the “Market Monitor” link in the left-hand navigation menu. Click on it to access the Monitors Page.

Step 2: Setting Up Your Tailored Monitors

On the Monitors Page, you’ll find a list of pre-configured monitors that align with your industry interests, such as “Emerging Technologies,” “Competitive Landscape,” or “Market Trends.” Click the “create new monitor” button to create a new monitor that meets your specific needs. Here, you can specify companies, sectors, themes, keywords, and more to tailor your monitor’s focus.

Step 3: Exploring and Curating Insights

Opening a Monitor: Click “Open Monitor” on any monitor card you’ve created. You’ll be directed to the Monitor Detail Page, where a curated newsfeed offers real-time insights filtered by your set criteria.

Interacting with Curated Content: Scroll through the newsfeed to browse relevant articles and updates. Click on any article to open it in the reading pane, where you can explore the details. Use the filter bar at the top of the page to further refine the content within your monitor, ensuring you see only the most relevant insights.

Step 4: Leveraging Expert-in-the-Loop AI for Personalized Insights

The Link Market Monitor utilizes expert-in-the-loop AI technology, which combines real-time data with expert analysis to deliver personalized insights. As you interact with the monitors, the AI engine continuously learns from your preferences, fine-tuning the content it delivers to ensure it remains highly relevant to your needs.

Step 5: Receiving Real-Time Alerts and Updates

Set up real-time alerts to stay informed without the noise. The Market Monitor’s AI engine filters out irrelevant information, sending you only the most pertinent updates. Customize your alerts to focus on key trends, opportunities, and competitive threats, ensuring you never miss a critical development in your industry.

Step 6: Sharing Insights with Your Team

Collaborating on Strategies: Use the shared monitors to collaborate effectively, ensuring your team is aligned with the latest market intelligence and ready to make informed decisions.

Best Practices:

Regularly Update Your Monitors: As your business goals evolve, update your monitors to reflect new priorities and market conditions.

Maximize AI Insights: Leverage the expert-in-the-loop AI to refine and improve the relevance of your insights continuously.

Focus on What Matters: Use the real-time signals to stay on top of key developments, allowing you to react swiftly to market changes.

Why the Market Monitor™ is Essential for Business Leaders

Proactive Decision-Making: The Market Monitor™ equips you with the most relevant insights, empowering you to stay ahead of market trends and shifts. By providing timely, actionable information, it allows you to anticipate changes and make decisions that drive your organization forward.

Enhanced Strategic Focus: As a business leader, focusing on what truly matters is crucial. The Market Monitor™ filters out irrelevant data and surfaces only the most pertinent signals, ensuring your strategic decisions are based on insights that directly impact your business objectives.

Continuous Adaptation: The expert-in-the-loop AI technology behind the Market Monitor™ ensures that the insights you receive are always aligned with current market conditions. As your business environment evolves, the Market Monitor™ adapts to provide you with up-to-date, relevant information, helping you stay agile in a competitive landscape.

Collaborative Insight Sharing: Effective leadership involves ensuring your entire team is aligned with the latest intelligence. The Market Monitor™ facilitates seamless collaboration by allowing you to share tailored insights across your organization, enabling informed, unified decision-making.

Strategic Empowerment: In a complex and fast-paced industry, having the right information at the right time is crucial. The Market Monitor™ empowers you with the knowledge and tools needed to navigate market complexities confidently, helping you lead your organization to sustained success.

The post Link How-To: Curate Actionable Insights and Gain a Competitive Edge with the Market Monitor™ appeared first on Liminal.co.


Spherical Cow Consulting

Privacy-Enhancing Technologies: Protecting Human and Non-Human Identities

Privacy-Enhancing Technologies (PETs) are essential for safeguarding digital identities amidst increasing data breaches. They encompass tools like zero-knowledge proofs and advanced biometrics to secure both human and non-human identities in the digital space. As digital identity expands to include non-human entities, PETs are vital for ensuring privacy and security.

I want to talk about PETs. No, not about my cats (though they are awesome), but about Privacy-Enhancing Technologies.

Not a day goes by without news of another data breach exposing critical details about people and things online. Enter Privacy-Enhancing Technologies (PETs)—a critical component in digital security. These tools, like zero-knowledge proofs and advanced biometrics, are designed to safeguard digital identities while allowing people and things to get work done.

The rise of privacy-enhancing technologies (PETs) like zero-knowledge proofs and advanced biometrics is reshaping how we think about and manage digital identity. But what’s driving this change, and why should it matter to you, whether you’re managing user access or overseeing countless processes and APIs in the cloud?

All Identities Need PETs

Digital identity isn’t just about people anymore. Sure, your personal online identity—how you log in, interact, and transact—remains essential. But increasingly, digital identity also includes non-human entities like software processes, APIs, and entire cloud workloads. These non-human identities need the same attention to security and privacy as human ones, especially as they become more central to how businesses operate.

When I first started thinking about digital identity, it was all about ensuring the right people had access to the right resources. Today, though, we’re dealing with identities that aren’t people at all—identities that exist in the cloud, managing everything from payroll to AI model training, often without any direct human oversight or even a human-like credential. And these identities need to be just as secure, if not more so, given the scale and complexity they operate within.

Human and Non-Human Considerations

Biometrics like facial recognition and fingerprint scanning have long been used to verify human identities. There’s a lot of work in the field of biometrics, especially with concerns about deepfakes making Ye Olde Fashioned liveness detection hardly a thing. But what about non-human identities? While biometrics might not apply directly, the principles of unique identification and secure access certainly do. For instance, in a cloud environment, processes and APIs need to be uniquely identified and authorized—much like a person—but with a focus on speed, scalability, and automation.

So, two challenges: ensuring that human identities are securely managed while also creating systems that can handle the massive scale of non-human identities. Whether it’s a government-issued digital credential or a cloud-based process, the goal is the same: secure, reliable, and privacy-respecting identity management.

Addressing Privacy Concerns with Digital Credentials

Governments are moving towards digital credentials to improve security and convenience. But this shift brings new privacy challenges. For humans, the way these credentials are issued and managed has significant implications for personal privacy. PETs like zero-knowledge proofs are becoming crucial to ensure that sensitive information remains private, even when it’s used to prove identity.

For non-human identities, the concerns are different but equally important. In cloud environments, digital credentials need to be robust enough to manage the complex interactions between countless processes and APIs, all while maintaining strict access controls and minimizing the risk of breaches.
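
As a rough illustration of the non-human case, the sketch below mints a short-lived signed token for a workload using Python and PyJWT. The identifiers and claims are assumptions for illustration only, not the output of any particular standard.

```python
# A minimal sketch of a short-lived, signed credential for a non-human
# identity (a workload calling an API). Requires PyJWT and cryptography;
# the service names and claims below are purely illustrative.
import datetime
import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import ec

signing_key = ec.generate_private_key(ec.SECP256R1())

now = datetime.datetime.now(datetime.timezone.utc)
token = jwt.encode(
    {
        "iss": "idp.example.org",                    # hypothetical issuer
        "sub": "workload:payroll-batch",             # a process, not a person
        "aud": "https://api.example.org/ledger",
        "iat": now,
        "exp": now + datetime.timedelta(minutes=5),  # short-lived by design
    },
    signing_key,
    algorithm="ES256",
)

# The receiving service verifies the signature, audience, and expiry.
claims = jwt.decode(
    token,
    signing_key.public_key(),
    algorithms=["ES256"],
    audience="https://api.example.org/ledger",
)
print("authorized workload:", claims["sub"])
```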

Of course, if it were easy, I wouldn’t be writing about it. Standards organizations like the IETF are trying to define what a credential should look like in a scenario where it may or may not be for a person (that’s work in SPICE). They’re also trying to define the best way to move those credentials around from one cloud service to the next, given those cloud services don’t exactly speak the same languages (that’s work in WIMSE). And these days we can’t have those conversations without considering the privacy implications of all of it.

Zero-Knowledge Proofs: PETs for All Identities

Which takes us to an area I find fascinating: Zero-Knowledge Proofs (ZKPs). ZKPs are a game-changer for both human and non-human identities. They allow for the verification of information without revealing the underlying data, making them perfect for situations where privacy is paramount. To put it another way, a ZKP will tell you that the proof is true without actually exposing any of the data that is included in the proof.  “Is this mobile driver’s license valid” becomes a question that can be answered without exposing any of the data in the mDL. It’s magic, I tell you, pure magic. (And math. Lots and lots of math.)
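
For the curious, here is a toy sketch of the classic Schnorr protocol, one of the simplest zero-knowledge proofs. The parameters are deliberately tiny and insecure, chosen only to show the commit/challenge/response shape; real mDL-style deployments use far larger groups and more sophisticated, non-interactive constructions.

```python
# A toy interactive Schnorr proof of knowledge: prove you know x such that
# h = g^x (mod p) without revealing x. Tiny, insecure parameters for clarity.
import secrets

p = 23   # toy safe prime: p = 2*q + 1
q = 11   # prime order of the subgroup
g = 4    # generator of the order-q subgroup mod 23

x = secrets.randbelow(q)   # prover's secret
h = pow(g, x, p)           # public value the prover claims to "know the key" for

# Commitment: prover picks a fresh random r and sends t = g^r mod p.
r = secrets.randbelow(q)
t = pow(g, r, p)

# Challenge: verifier replies with a random c.
c = secrets.randbelow(q)

# Response: prover sends s = r + c*x mod q; the random r masks x,
# so (t, c, s) reveals nothing about x itself.
s = (r + c * x) % q

# Verification: g^s must equal t * h^c (mod p).
assert pow(g, s, p) == (t * pow(h, c, p)) % p
print("verified knowledge of x without revealing it")
```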

In the human world, this might mean you will be able to prove your identity without exposing personal details. In the non-human world, ZKPs can help secure interactions between cloud processes, ensuring that only authorized entities can access sensitive data or perform critical operations. This approach not only protects individual privacy but also bolsters the security of complex digital ecosystems.

Why aren’t ZKPs widely deployed? Because the math involved is computationally heavy, and not all devices can actually handle the necessary computations in the time people expect their web pages to load or their APIs to run. But that’s today; tomorrow is going to be an entirely different story as hardware improves.

Visiting the PETs Shop

Technology is at the heart of these advances. From cryptography to AI, new tools are making it possible to protect digital identities against a range of threats. But with great power comes great responsibility. Whether it’s human users at risk from phishing attacks or non-human processes vulnerable to security breaches, there will never be a point where security and privacy are guaranteed. Innovation will always be necessary to get ahead of bad actors.

For human identities, this might mean adopting stronger authentication methods. For non-human identities, it could involve developing more sophisticated ways to manage and secure API interactions across multiple cloud environments. The challenge is ensuring that these technologies are both effective and adaptable, capable of protecting identities at scale.

PETs Need to be Everywhere

As digital identity continues to evolve, the line between human and non-human identities will blur further. In commerce, for example, digital identities—whether of customers or the processes serving them—are becoming central to every transaction. The transactions may trigger any number of APIs and services that go far beyond a single person’s digital identity. And since not all problems have been solved, businesses are going to have to support the innovation necessary to keep their data safe.

Wrap Up – Loving Your PETs

The future of digital identity is definitely not boring! PETs play a crucial role in shaping how we protect digital identities and are definitely worthy of some focused attention. It’s not the only piece of the puzzle in keeping our data safe, but it’s a biggy.

For tech leaders, I’m afraid you have another area of technology you need to keep on your radar. Your organization must engage in shaping privacy-enhancing digital identity solutions. Don’t just install them; think about how they meet tomorrow’s requirements. Better yet, be a part of defining tomorrow’s requirements in the standards being developed today.

For individual contributors like me, it’s crucial to stay informed. Keep up with the latest security practices, and be on the lookout for open calls for comments on the standards that impact this space. Your voice matters in shaping those standards and regulations.

And if keeping track of all this sounds overwhelming, why not let someone else do the heavy lifting? Reach out to me; let’s chat about how I can help by providing regular updates and insights, tailored to your needs. You don’t have to do this alone.

The post Privacy-Enhancing Technologies: Protecting Human and Non-Human Identities appeared first on Spherical Cow Consulting.


IDnow

AML compliance in 2024: Assessing the effectiveness of AMLD6 and EU’s new AML package.

We explore the EU’s new AML package of rules and consider how it will affect the future of compliance in Europe. 

Ever since the first directive to combat money laundering and the financing of terrorism was issued in 1991, the European Union has continued to improve and harmonize the legislative arsenal of its member states. 

In the space of 30 years, six dedicated Anti-Money Laundering Directives (AMLD) have been issued. The first was mainly aimed at combating drug-related offences and introduced the first KYC provisions. The 4th and 5th Directives (AMLD4 & AMLD5) brought in increased transparency obligations, including access to beneficial ownership registers and strengthening controls on virtual currency transactions. With each new iteration, the scope of protection has expanded significantly and now covers many areas, ranging from art dealing to cryptocurrency trading.  

A major development in AML controls came in May 2024 with the release of the AML package, a set of legislative proposals aimed at strengthening the EU’s AML/CFT rules. The AML package aims to close regulatory gaps, strengthen cooperation between member states and ensure uniform application of the rules across the EU.

The AML package is well on its way to becoming a comprehensive model for the banking industry. It offers uniformity and efficient application of AML requirements, and the combined rule sets cover everything from top-level economic decision-making to the daily lives of individuals. However, such legislation and regulation often carries a somewhat negative reputation, as its final form can stifle innovation rather than protect the people it claims to serve.

Analysts and pundits commend the EU for its outreach to seek input and collaboration on new legislation, but the final forms of initiatives rarely resemble the spirit in which they began. This is exemplified in the Draghi Report of September 2024, which discusses European competitiveness.

“As the AML package is being finalized, there is still the opportunity for strong private sector collaboration. If done right, this brings Europe close to ‘digital first’ solutions that are standardized, scalable and competitive on a global scale,” says Rayissa Armata, Director of Global Regulatory and Government Affairs at IDnow.

“This would better ensure a more level playing field for both traditional services alongside rapidly growing industries such as crypto, blockchain, and digital identity verification processes based on more secure frameworks. If such points are harmonized and implemented properly, Europe has a strong chance to be a leader in the next phase of development in the digital economy,” adds Rayissa.

Here, we explore some of the new rules and consider the effect they may have on AMLD6 and the future of compliance in Europe.

5 new changes to AML rules and regulations in 2024.

1. A new European Anti-Money Laundering Authority (AMLA) has been established and will be operational in Frankfurt from 2025. With a staff of 400, it will centralize anti-money laundering efforts, coordinate national authorities and conduct cross-border investigations.

2. A directive that will further tighten the criminal provisions and procedures member states need to adopt to improve the AML/CFT regime.

3. A regulation that will introduce harmonised rules, directly applicable across all EU member states, to combat money laundering and terrorist financing.

4. Crypto-asset service providers (CASPs) will now be required to collect and store information on the source and beneficiary of the funds for each transaction. This rule, known as the “travel rule”, already exists in traditional finance and requires that information on the source of the asset and its beneficiary travels with the transaction and is stored on both sides of the transfer. CASPs will be obliged to provide this information to competent authorities if an investigation is conducted into money laundering and terrorist financing. This means that businesses operating in these spaces must adopt harmonized verification standards, aligning with those used by traditional financial institutions. (A minimal sketch of such a record follows this list.)

5. A directive on access to centralized bank account registers, which makes information from centralized bank registers – data relating to the identity and location of bank account holders – available to member states through a single access point.
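
To picture what the travel rule requires in practice, here is a minimal Python sketch of the record that must travel with a transfer. The field names are hypothetical; real implementations follow data standards such as IVMS 101 and exchange this information through dedicated messaging protocols.

```python
# A minimal sketch of "travel rule" data accompanying a crypto-asset
# transfer. Field names are illustrative assumptions, not a standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class Party:
    name: str
    account: str   # e.g., wallet address or IBAN
    vasp: str      # the service provider holding the account

@dataclass
class TravelRuleRecord:
    tx_id: str
    amount: str
    asset: str
    originator: Party    # source of the funds
    beneficiary: Party   # recipient of the funds

record = TravelRuleRecord(
    tx_id="tx-0001",
    amount="1.25",
    asset="ETH",
    originator=Party("Alice Example", "0xaaa0", "vasp-a.example"),
    beneficiary=Party("Bob Example", "0xbbb0", "vasp-b.example"),
)

# Both the sending and receiving CASP store this record and must be able
# to produce it for competent authorities on request.
print(json.dumps(asdict(record), indent=2))
```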

Regulations, directives and AMLD6 changes.

It’s important to note that there is an Anti-Money Laundering Regulation (AMLR) and Anti-Money Laundering Directives.

AMLR focuses more on regulatory and supervisory mechanisms, while directives such as AMLD6 enhance the criminal law framework for tackling money laundering. Together, these laws are designed to increase financial transparency, make it harder to use the financial system for illicit purposes, and ensure greater accountability for both individuals and legal entities involved in money laundering.

The AMLR provides a uniform set of standards directly applicable across the EU, ensuring consistency in financial and compliance procedures. AMLD6, however, allows member states some flexibility in how they apply criminal sanctions and enforcement measures, provided they align with the directive’s goals. Together, AMLR and AMLD6 form a cohesive framework within the AML Package.

AMLD6, which came into force in December 2020, has introduced several new legal provisions and expanded the list of criminal offences related to money laundering. Faced with the diversification of money laundering schemes, it now includes offences that go beyond simple financial crime. There are now 22 additional offences, including environmental crimes, tax crimes and cybercrime.  

AMLD6 also encourages member states to prosecute “facilitators” who help to carry out illegal activities. How member states should prosecute is also being revised, and AMLD6 seeks to improve the deterrent effect of existing legislation by imposing tougher penalties. EU member states are now required to impose prison sentences of at least four years for serious money laundering offences, with heavier penalties for repeat offenders. Significant financial penalties (up to €5 million for individuals) can also be imposed to deprive culprits of any profit derived from illicit activities.

Another major development is the expansion of who can be held responsible for money laundering. From now on, legal entities can be held liable for money laundering offences committed by their employees. Companies may also be subject to severe penalties, which could result in the company’s closure. Executives may also be held liable for money laundering offences committed within their organization as part of the EU’s plan to adopt “effective, proportionate and dissuasive criminal sanctions”.

Recognizing the transnational challenges posed by organized crime and money laundering, AMLD6 promotes a rapid and effective exchange of information on suspicious transactions and ongoing investigations, as well as enhanced legal assistance in the collection of evidence and freezing of assets. It also promotes cooperation with specialized European agencies, such as Europol and Eurojust to facilitate the coordination of cross-border investigations. 

Finally, the legislation contains enhanced due diligence provisions for wealthy individuals with assets of more than €50 million, excluding their main residence, as well as an EU-wide limit of €10,000 for cash payments. 

The future of AML compliance. 

The implementation of AMLD6 has significant implications for businesses and financial institutions. Companies will now be required to protect themselves against compliance risks and adopt appropriate control mechanisms and systems, conduct regular audits, and raise awareness among their employees. This includes investing in advanced transaction monitoring and analysis technologies to proactively detect suspicious financial activity. These actions are necessary to protect the integrity of the company, avoid severe penalties, and maintain stakeholder trust. 

In addition, many industries that were not previously required to comply with certain AML regulations will now need to be more transparent with their transactions. For example, from 2029, top-tier professional football clubs involved in large-scale financial transactions, whether with sponsors, advertisers or in the context of player transfers, will have to comply with certain KYC rules. Like the financial sector, football clubs will have to verify the identity of their customers, monitor transactions and report any suspicious transactions to Financial Intelligence Units (FIUs).

As money laundering and terrorist financing are global problems, measures adopted at EU level must be coordinated with international measures; otherwise, they will have very limited effect. The European Union must therefore continue to consider the recommendations of the Financial Action Task Force (FATF) and other international bodies active in AML/CFT.

The new package of AML rules has now been published in the EU’s Official Journal, which means that companies will have up to two years to implement some measures and three years for others.


By Mallaury Marie, Content Manager at IDnow

Connect with Mallaury on LinkedIn